• Optimization of cascading recursions in "C" compilers?

    From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Tue Sep 30 17:54:41 2025
    From Newsgroup: comp.lang.c

    In another newsgroup the question came up as to what degree the
    GNU "C" compilers can handle optimization of recursions like
    fib() or ack(), i.e. cascading ones. Somebody here may know?

    Can we expect that (with the higher optimization levels) the
    exponential complexity from cascaded recursion gets linearized?

    Janis
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Tue Sep 30 18:04:21 2025
    From Newsgroup: comp.lang.c

    On 30/09/2025 17:54, Janis Papanagnou wrote:
    In another newsgroup the question came up as to what degree the
    GNU "C" compilers can handle optimization of recursions like
    fib() or ack(), i.e. cascading ones. Somebody here may know?

    Can we expect that (with the higher optimization levels) the
    exponential complexity from cascaded recursion gets linearized?

    Janis

    Optimising compilers can generally handle straight tail recursion well
    enough. So if you write a simple "fib" function with two recursive
    calls, you'll get one turned into a loop and the other kept as
    recursion. I don't think there is a general way to turn multiple
    recursions into loops - you have to make a stack-like data structure
    somewhere along the line, and no C compiler is going to do that
    automatically.
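
    To illustrate the straight tail-recursion case, a minimal sketch
    (illustrative only, not actual compiler output):

        unsigned long fact_acc(unsigned long n, unsigned long acc) {
            if (n <= 1)
                return acc;                    /* base case */
            return fact_acc(n - 1, acc * n);   /* tail call: an optimising
                                                  compiler can emit a jump
                                                  here, i.e. run it as a
                                                  loop with no stack growth */
        }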

    You might find that compilers like GHC for Haskell can do better,
    since recursion is the bread and butter of functional programming
    languages.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Tue Sep 30 16:12:14 2025
    From Newsgroup: comp.lang.c

    On 2025-09-30, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    In another newsgroup the question came up as to what degree the
    GNU "C" compilers can handle optimization of recursions like
    fib() or ack(), i.e. cascading ones. Somebody here may know?

    What exactly are you talking about? The ability to unroll one or more
    levels of recursion by inlining cases of a function into itself
    to create a larger function that makes fewer calls?

    The common way fib is written recursively doesn't lend itself to
    tail call optimization; it has to be refactored for that.

    As Ackermann is usually written, it has some tail calls
    and a non-tail call, since one of its cases returns
    a call to ack whose argument is computed by calling ack.
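
    In C that case looks roughly like this (a sketch of the usual
    definition):

        unsigned long ack(unsigned long m, unsigned long n) {
            if (m == 0)
                return n + 1;
            if (n == 0)
                return ack(m - 1, 1);          /* tail call */
            return ack(m - 1, ack(m, n - 1));  /* outer call is in tail
                                                  position; the inner
                                                  one is not */
        }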

    Can we expect that (with the higher optimization levels) the
    exponential complexity from cascaded recursion gets linearized?

    I'm reasonably sure that GCC is not going to recognize and linearize
    "return fib(n - 1) + fib(n - 2)" recursion. You have to do it yourself
    with explicit memoization or an iterative rewrite.
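
    The memoization route is easy enough to sketch (an illustration,
    assuming 64-bit values; fib(93) is the largest Fibonacci number
    that fits in 64 bits):

        #include <stdint.h>

        uint64_t fib_memo(int n) {   /* valid for 0 <= n <= 93 */
            static uint64_t memo[94];
            if (n < 2)
                return n;
            if (memo[n] == 0)        /* not computed yet */
                memo[n] = fib_memo(n - 1) + fib_memo(n - 2);
            return memo[n];
        }

    This makes the cost linear in n instead of exponential.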
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Wed Oct 1 00:07:46 2025
    From Newsgroup: comp.lang.c

    On 30.09.2025 18:12, Kaz Kylheku wrote:
    On 2025-09-30, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    In another newsgroup the question came up as to what degree the
    GNU "C" compilers can handle optimization of recursions like
    fib() or ack(), i.e. cascading ones. Somebody here may know?

    What exactly are you talking about? The ability to unroll one or more
    levels of recursion by inlining cases of a function into itself
    to create a larger function that makes fewer calls?

    No. (see below)

    The common way fib is written recursively doesn't lend itself to
    tail call optimization; it has to be refactored for that.

    As Ackermann is usually written, it has some tail calls
    and a non-tail call, since one of its cases returns
    a call to ack whose argument is computed by calling ack.

    Can we expect that (with the higher optimization levels) the
    exponential complexity from cascaded recursion gets linearized?

    I'm reasonably sure that GCC is not going to recognize and linearize
    "return fib(n - 1) + fib(n - 2)" recursion. You have to do it yourself
    with explicit memoization or an iterative rewrite.

    To (hopefully) answer your question above and clear up what I
    meant (and what I haven't meant)...

    I'm not speaking about transformations of recursive calls to
    iterative loops (which don't reduce algorithmic complexity).
    I was rather thinking about transformations like the following
    (where fib-1 is the standard form, which you show above as fib):

    fib-1:
    func __(_){return(_>2?__(--_)+__(--_):1)}

    fib-2:
    func ___(_) { return __(_,x^x,x^x^x) }
    func __(_,_x,x_) { return --_?__(_,x_,_x+x_):_x+x_ }

    I apologize for the obfuscated format (it's code I've written
    for a fun post in comp.lang.awk), but we can still see the
    call structure (I hope). Both forms are written functionally.

    The second, a formally refactored form of the first, doesn't
    have those (time-consuming) *cascaded* calls any more. (The
    memoization is implicitly done through the function arguments.)

    $ time ksh fib-1 40
    102334155

    real 0m48.69s
    user 0m48.37s
    sys 0m00.21s

    $ time ksh fib-2 40
    102334155

    real 0m00.00s
    user 0m00.00s
    sys 0m00.00s

    My guess would have been that compilers would (usually?) not
    perform such transformations when optimizing.

    I understood your reply as agreeing with that opinion.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Wed Oct 1 00:15:20 2025
    From Newsgroup: comp.lang.c

    On 30.09.2025 18:04, David Brown wrote:
    On 30/09/2025 17:54, Janis Papanagnou wrote:
    In another newsgroup the question came up as to what degree the
    GNU "C" compilers can handle optimization of recursions like
    fib() or ack(), i.e. cascading ones. Somebody here may know?

    Can we expect that (with the higher optimization levels) the
    exponential complexity from cascaded recursion gets linearized?

    Janis

    Optimising compilers can generally handle straight tail recursion well
    enough. So if you write a simple "fib" function with two recursive
    calls, you'll get one turned into a loop and the other kept as
    recursion. I don't think there is a general way to turn multiple
    recursions into loops - you have to make a stack-like data structure
    somewhere along the line, and no C compiler is going to do that
    automatically.

    Well, a linear-recursion to iterative-loop transformation would
    not reduce algorithmic complexity. So a stack would not address
    the potential goal of my question. (See also my other reply for
    more details/explanations/clarifications concerning my question.)


    You might find that compilers like GHC for Haskell can do better,
    since recursion is the bread and butter of functional programming
    languages.

    Would the Haskell compilers do such transformations?

    In the other post mentioned above I have two variants, both
    formulated functionally. My impression was that these sorts of
    transformations might not lie within the usual refactoring/optimizing
    capabilities of common compilers.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bart@bc@freeuk.com to comp.lang.c on Wed Oct 1 00:02:59 2025
    From Newsgroup: comp.lang.c

    On 30/09/2025 17:12, Kaz Kylheku wrote:
    On 2025-09-30, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    In another newsgroup the question came up as to what degree the
    GNU "C" compilers can handle optimization of recursions like
    fib() or ack(), i.e. cascading ones. Somebody here may know?

    What exactly are you talking about? The ability to unroll one or more
    levels of recursion by inlining cases of a function into itself
    to create a larger function that makes fewer calls?

    The common way fib is written recursively doesn't lend itself to
    tail call optimization; it has to be refactored for that.

    As Ackermann is usually written, it has some tail calls
    and a non-tail call, since one of its cases returns
    a call to ack whose argument is computed by calling ack.

    Can we expect that (with the higher optimization levels) the
    exponential complexity from cascaded recursion gets linearized?

    I'm reasonably sure that GCC is not going to recognize and linearize
    "return fib(n - 1) + fib(n - 2)" recursion. You have to do it yourself
    with explicit memoization or an iterative rewrite.


    Take a Fib() function using 'if (n<3) return 1; else...' (since there
    are a few different ways of doing it).

    In that form, evaluating Fib(N) requires 2*Fib(N)-1 calls in total,
    which is what makes it a good benchmark. Or it would be if gcc didn't
    try to cheat.

    Compiled with -O3, gcc implements Fib(N) with only 5% of the necessary
    number of calls. (This doesn't make it 20x faster - more like 3x as
    fast, as there's still a lot going on.)

    You can only find out it's 5% by inserting counting code, which can only
    be done in the generated assembly, otherwise gcc will ensure the count
    has the expected result.

    What is normally 25 lines of assembly ends up at over 250 lines. It's a
    lot of effort to expend on a 6-line function, which makes you wonder if
    it recognises this benchmark and gives it special treatment.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.c on Wed Oct 1 01:22:37 2025
    From Newsgroup: comp.lang.c

    bart <bc@freeuk.com> wrote:
    On 30/09/2025 17:12, Kaz Kylheku wrote:
    On 2025-09-30, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    In another newsgroup the question came up as to what degree the
    GNU "C" compilers can handle optimization of recursions like
    fib() or ack(), i.e. cascading ones. Somebody here may know?

    What exactly are you talking about? The ability to unroll one or more
    levels of recursion by inlining cases of a function into itself
    to create a larger function that makes fewer calls?

    The common way fib is written recursively doesn't lend itself to
    tail call optimization; it has to be refactored for that.

    As Ackermann is usually written, it has some tail calls
    and a non-tail call, since one of its cases returns
    a call to ack whose argument is computed by calling ack.

    Can we expect that (with the higher optimization levels) the
    exponential complexity from cascaded recursion gets linearized?

    I'm reasonably sure that GCC is not going to recognize and linearize
    "return fib(n - 1) + fib(n - 2)" recursion. You have to do it yourself
    with explicit memoization or an iterative rewrite.


    Take a Fib() function using 'if (n<3) return 1; else...' (since there
    are a few different ways of doing it).

    In that form, evaluating Fib(N) requires 2*Fib(N)-1 calls in total,
    which is what makes it a good benchmark. Or it would be if gcc didn't
    try to cheat.

    gcc simply tries to generate more efficient code.

    Compiled with -O3, gcc implements Fib(N) with only 5% of the necessary
    number of calls. (This doesn't make it 20x faster - more like 3x as
    fast, as there's still a lot going on.)

    I did not deeply analyze the generated code, but AFAICS gcc-12 at -O2
    is doing only one recursive call (for n - 1) and turns the second call
    into iteration. That is, it effectively transforms

    long
    fib(int n) {
        if (n < 2) {
            return n;
        }
        return fib(n - 1) + fib(n - 2);
    }

    into

    long
    fib(int n) {
        long acc = 0;
    again:
        if (n < 2) {
            return n + acc;
        }
        acc += fib(n - 1);
        n = n - 2;
        goto again;
    }

    So I think it performs essentially the same arithmetic as the
    original version, just limiting the number of calls. In particular,
    the asymptotic complexity is exactly the same; it basically just
    changes the constant factor.

    In principle gcc could do inlining, transforming fib into:

    long
    fib(int n) {
        if (n < 2) {
            return n;
        }
        long r1;
        if ((n - 1) < 2) {
            r1 = n - 1;
        } else {
            r1 = fib(n - 1 - 1) + fib(n - 1 - 2);
        }
        return r1 + fib(n - 2);
    }

    That is, it could expand the fib(n - 1) call inline.

    Then it could notice that n - 1 - 1 is n - 2 and that calls to fib
    with equal arguments give the same value. It could also simplify the
    other branch of the if, leading to:

    long
    fib(int n) {
        if (n < 2) {
            return n;
        }
        if (n == 2) {
            return 1;
        }
        long r1 = fib(n - 2);
        return r1 + r1 + fib(n - 3);
    }

    This is still exponential, but has lower asymptotic complexity than
    the original version: the recursion now satisfies
    T(n) = T(n-2) + T(n-3) + O(1), which grows roughly like 1.33^n
    instead of the original phi^n (about 1.62^n).

    You can only find out it's 5% by inserting counting code, which can only
    be done in the generated assembly, otherwise gcc will ensure the count
    has the expected result.

    What is normally 25 lines of assembly ends up at over 250 lines. It's a
    lot of effort to expend on a 6-line function, which makes you wonder if
    it recognises this benchmark and gives it special treatment.

    I do not think so:

    1) The transformation that gcc uses works for a largish class of
    functions; such recursive functions are common when doing list
    processing in functional style. For example, consider:

    typedef struct gen_lst gen_lst;
    struct gen_lst {gen_lst * next;};

    long
    gen_length(gen_lst * l) {
        if (!l) {
            return 0;
        }
        return 1 + gen_length(l->next);
    }

    gcc-12 at -O2 compiles the function above to a loop (there are no
    recursive calls in the generated code).
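
    Roughly, the generated code corresponds to this (a sketch, not gcc's
    literal output):

        long
        gen_length(gen_lst * l) {
            long n = 0;
            for (; l; l = l->next)
                n++;    /* the "1 +" of each frame accumulates here */
            return n;
        }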

    2) I have an example where gcc spends a lot of time trying to compile
    a rather simple and not too big function. Similar (but worse) things
    happen with a version in a different language using a different
    compiler. AFAICS the compiler notices that it has essentially linear
    code and tries to collect information useful for optimization. But it
    gets too much (useless) information, and processing it leads to long
    compile times. What does this have to do with the Fibonacci example?
    Compilers collect a lot of information and decide based on it whether
    some transformation should be applied. Transforming the Fibonacci
    function happens because it satisfies the conditions, but the
    transformation was developed for different functions.

    3) As illustrated, there are better transformations, but unlike the
    transformation that gcc uses, the other transformations are probably
    useless for most real-world code.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ben Bacarisse@ben@bsb.me.uk to comp.lang.c on Wed Oct 1 10:05:29 2025
    From Newsgroup: comp.lang.c

    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:

    On 30.09.2025 18:12, Kaz Kylheku wrote:
    On 2025-09-30, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    In another newsgroup the question came up as to what degree the
    GNU "C" compilers can handle optimization of recursions like
    fib() or ack(), i.e. cascading ones. Somebody here may know?

    What exactly are you talking about? The ability to unroll one or more
    levels of recursion by inlining cases of a function into itself
    to create a larger function that makes fewer calls?

    No. (see below)

    The common way fib is written recursively doesn't lend itself to
    tail call optimization; it has to be refactored for that.

    As Ackermann is usually written, it has some tail calls
    and a non-tail call, since one of its cases returns
    a call to ack whose argument is computed by calling ack.

    Can we expect that (with the higher optimization levels) the
    exponential complexity from cascaded recursion gets linearized?

    I'm reasonably sure that GCC is not going to recognize and linearize
    "return fib(n - 1) + fib(n - 2)" recursion. You have to do it yourself
    with explicit memoization or an iterative rewrite.

    To (hopefully) answer your question above and clear up what I
    meant (and what I haven't meant)...

    I'm not speaking about transformations of recursive calls to
    iterative loops (which don't reduce algorithmic complexity).
    I was rather thinking about transformations like the following
    (where fib-1 is the standard form, which you show above as fib):

    fib-1:
    func __(_){return(_>2?__(--_)+__(--_):1)}

    fib-2:
    func ___(_) { return __(_,x^x,x^x^x) }
    func __(_,_x,x_) { return --_?__(_,x_,_x+x_):_x+x_ }

    I apologize for the obfuscated format (it's code I've written
    for a fun post in comp.lang.awk), but we can still see the
    call structure (I hope). Both forms are written functionally.

    Particularly hard since some newsreaders interpret _text_ as
    underlining. Mine did. Took me a while to see that this was AWK code
    at all! Since this is comp.lang.c, let's have it in C (without the
    decrement operations):

    typedef unsigned int N;

    N fib1(N n) { return n > 2 ? fib1(n-1) + fib1(n-2) : 1; }

    N fib_aux(N n, N a, N b) { return n > 1 ? fib_aux(n-1, b, a+b) : a+b; }
    N fib2(N n) { return fib_aux(n, 1, 0); }

    #include <stdlib.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        printf("%u\n", fib2(argc > 1 ? atoi(argv[1]) : 40));
    }

    The second, a formally refactored form of the first, doesn't
    have those (time-consuming) *cascaded* calls any more.

    To me, "formally refactored" means there would be a set of formal rules
    that can be applied. What rules did you have in mind?

    As for your general point, it can't be done in general or we'd most
    likely know that P = NP. On the specific point about ack(), Ackermann
    devised the function to show that not all total recursive functions are primitive recursive.
    --
    Ben.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Wed Oct 1 19:01:12 2025
    From Newsgroup: comp.lang.c

    On 01.10.2025 11:05, Ben Bacarisse wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    [...]
    [...]

    typedef unsigned int N;

    N fib1(N n) { return n > 2 ? fib1(n-1) + fib1(n-2) : 1; }

    N fib_aux(N n, N a, N b) { return n > 1 ? fib_aux(n-1, b, a+b) : a+b; }
    N fib2(N n) { return fib_aux(n, 1, 0); }

    #include <stdlib.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        printf("%u\n", fib2(argc > 1 ? atoi(argv[1]) : 40));
    }

    The second, a formally refactored form of the first, doesn't
    have those (time-consuming) *cascaded* calls any more.

    To me, "formally refactored" means there would be a set of formal rules
    that can be applied. What rules did you have in mind?

    It's expansion, redundancy removal, compression, sub-function matching;
    actually this is described (e.g.) in: Bauer, Wössner (Springer, 1981),
    in the 1984 edition on page 301. (I won't type it all in here for the
    post, but if you're interested I can scan the page and provide a link.)

    My doubt was whether these formal rules could be applied in some formal
    process in a practically sensible time scale.

    (But science advances, and over the decades some progress may well
    have been made on such formal transformations.)


    As for your general point, it can't be done in general

    Yes, I suspected that. Thanks for the confirmation.

    or we'd most
    likely know that P = NP. On the specific point about ack(), Ackermann devised the function to show that not all total recursive functions are primitive recursive.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Wed Oct 1 19:08:58 2025
    From Newsgroup: comp.lang.c

    On 01.10.2025 19:01, Janis Papanagnou wrote:
    On 01.10.2025 11:05, Ben Bacarisse wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    [...]
    [...]

    typedef unsigned int N;

    N fib1(N n) { return n > 2 ? fib1(n-1) + fib1(n-2) : 1; }

    N fib_aux(N n, N a, N b) { return n > 1 ? fib_aux(n-1, b, a+b) : a+b; }
    N fib2(N n) { return fib_aux(n, 1, 0); }

    #include <stdlib.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        printf("%u\n", fib2(argc > 1 ? atoi(argv[1]) : 40));
    }

    The second, a formally refactored form of the first, doesn't
    have those (time-consuming) *cascaded* calls any more.

    To me, "formally refactored" means there would be a set of formal rules
    that can be applied. What rules did you have in mind?

    It's expansion, redundancy removal, compression, sub-function matching;
    actually this is described (e.g.) in: Bauer, Wössner (Springer, 1981),
    in the 1984 edition on page 301. (I won't type it all in here for the
    post, but if you're interested I can scan the page and provide a link.)

    See also: Partsch, "Specification and Transformation of Programs"
    (Springer, 1990), pages 280ff and 304, with even more formalized
    rules (there likewise demonstrated using 'fib' as the example).


    My doubt was whether these formal rules could be applied in some formal process in a practically sensible time scale.

    (But science advances, and over the decades some progress may well
    have been made on such formal transformations.)


    As for your general point, it can't be done in general

    Yes, I suspected that. Thanks for the confirmation.

    or we'd most
    likely know that P = NP. On the specific point about ack(), Ackermann
    devised the function to show that not all total recursive functions are
    primitive recursive.

    Janis


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ben Bacarisse@ben@bsb.me.uk to comp.lang.c on Thu Oct 2 00:48:24 2025
    From Newsgroup: comp.lang.c

    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:

    On 01.10.2025 19:01, Janis Papanagnou wrote:
    On 01.10.2025 11:05, Ben Bacarisse wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    [...]
    [...]

    typedef unsigned int N;

    N fib1(N n) { return n > 2 ? fib1(n-1) + fib1(n-2) : 1; }

    N fib_aux(N n, N a, N b) { return n > 1 ? fib_aux(n-1, b, a+b) : a+b; }
    N fib2(N n) { return fib_aux(n, 1, 0); }

    #include <stdlib.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        printf("%u\n", fib2(argc > 1 ? atoi(argv[1]) : 40));
    }

    The second, a formally refactored form of the first, doesn't
    have those (time-consuming) *cascaded* calls any more.

    To me, "formally refactored" means there would be a set of formal rules
    that can be applied. What rules did you have in mind?

    It's expansion, redundancy removal, compression, sub-function matching;
    actually this is described (e.g.) in: Bauer, Wössner (Springer, 1981),
    in the 1984 edition on page 301. (I won't type it all in here for the
    post, but if you're interested I can scan the page and provide a link.)

    See also: Partsch, "Specification and Transformation of Programs"
    (Springer, 1990), pages 280ff and 304, with even more formalized
    rules (there likewise demonstrated using 'fib' as the example).

    I no longer have access to an academic library :-( Could you write out
    the code in the stages that took you from fib1 to fib2?
    --
    Ben.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Thu Oct 2 05:25:00 2025
    From Newsgroup: comp.lang.c

    On 02.10.2025 01:48, Ben Bacarisse wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 01.10.2025 19:01, Janis Papanagnou wrote:

    It's expansion, redundancy removal, compression, sub-function matching;
    actually this is described (e.g.) in: Bauer, Wössner (Springer, 1981),
    in the 1984 edition on page 301. (I won't type it all in here for the
    post, but if you're interested I can scan the page and provide a link.)

    http://volatile.gridbug.de/fib_bw.pdf

    See also: Partsch, "Specification and Transformation of Programs"
    (Springer, 1990), pages 280ff and 304, with even more formalized
    rules (there likewise demonstrated using 'fib' as the example).

    http://volatile.gridbug.de/fib_p.pdf

    I no longer have access to an academic library :-( Could you write out
    the code in the stages that took you from fib1 to fib2?

    It's in document fib_bw.pdf (the first link; in German).

    (The PDF scans need rotating, sorry; HTH).

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ben Bacarisse@ben@bsb.me.uk to comp.lang.c on Fri Oct 3 15:09:47 2025
    From Newsgroup: comp.lang.c

    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:

    On 02.10.2025 01:48, Ben Bacarisse wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 01.10.2025 19:01, Janis Papanagnou wrote:

    It's expansion, redundancy removal, compression, sub-function matching;
    actually this is described (e.g.) in: Bauer, Wössner (Springer, 1981),
    in the 1984 edition on page 301. (I won't type it all in here for the
    post, but if you're interested I can scan the page and provide a link.)

    http://volatile.gridbug.de/fib_bw.pdf

    See also: Partsch, "Specification and Transformation of Programs"
    (Springer, 1990), pages 280ff and 304, with even more formalized
    rules (there likewise demonstrated using 'fib' as the example).

    http://volatile.gridbug.de/fib_p.pdf

    That's about memo-ization, which is not the transformation I was asking
    about.

    I no longer have access to an academic library :-( Could you write out
    the code in the stages that took you from fib1 to fib2?

    It's in document fib_bw.pdf (the first link; in German).

    My German is bad, but the big step seems to be annotated "it is obvious that...". It may be obvious in this case, but there is no hint at what
    formal rule is being applied, or why one would apply it.

    I think I just had a more mechanistic approach in mind when you said
    "formally refactored".

    Thanks for clearing up what transformations you had in mind.
    --
    Ben.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Fri Oct 3 16:47:00 2025
    From Newsgroup: comp.lang.c

    On 03.10.2025 16:09, Ben Bacarisse wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:

    On 02.10.2025 01:48, Ben Bacarisse wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 01.10.2025 19:01, Janis Papanagnou wrote:

    It's expansion, redundancy removal, compression, sub-function matching;
    actually this is described (e.g.) in: Bauer, Wössner (Springer, 1981),
    in the 1984 edition on page 301. (I won't type it all in here for the
    post, but if you're interested I can scan the page and provide a link.)

    http://volatile.gridbug.de/fib_bw.pdf

    See also: Partsch, "Specification and Transformation of Programs"
    (Springer, 1990), pages 280ff and 304, with even more formalized
    rules (there likewise demonstrated using 'fib' as the example).

    http://volatile.gridbug.de/fib_p.pdf

    That's about memo-ization, which is not the transformation I was asking about.

    The Partsch link was meant as an add-on example of a formal deduction,
    specified even in a formal representation. (My impression was that you
    were looking for something like that in principle; you were not very
    clear about that.)


    I no longer have access to an academic library :-( Could you write out
    the code in the stages that took you from fib1 to fib2?

    It's in document fib_bw.pdf (the first link; in German).

    My German is bad, but the big step seems to be annotated "it is obvious that...". It may be obvious in this case, but there is no hint at what formal rule is being applied, or why one would apply it.

    This is, IMO, indeed the crucial step! It's formulated (as you
    noticed) in a way we often hear from mathematicians: "as can
    obviously be seen/derived", "trivially we do [this or that]", and
    so on. For mathematicians, I'm sure, that's part of the experience
    and tool-chest they pick from when deriving such transformations.
    The "embedding in a linear combination" (which is what's done here
    in the first step) seems to be a common method, though.
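
    For concreteness, here is my reconstruction of that embedding step
    for fib (a sketch of the idea only, not the book's exact rule set),
    written against Ben's C names from up-thread:

        /*
         * Embedding: generalize fib1 to a linear combination
         *     fib_aux(n, a, b) == a*fib1(n) + b*fib1(n+1)
         * Unfolding fib1(n+1) == fib1(n) + fib1(n-1) (valid for n >= 2)
         * and regrouping gives
         *     fib_aux(n, a, b)
         *  == a*fib1(n) + b*(fib1(n) + fib1(n-1))
         *  == b*fib1(n-1) + (a+b)*fib1(n)
         *  == fib_aux(n-1, b, a+b)                       (fold)
         * with base case fib_aux(1, a, b) == a*1 + b*1 == a + b.
         * Instantiating a = 1, b = 0 yields fib1(n) == fib_aux(n, 1, 0),
         * i.e. Ben's fib2, which has a single tail-recursive call.
         */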

    That necessary math experience is also the source of my doubt that
    optimizers in compilers can be expected to make use of such tools.
    But I also think that applying these methods "mechanically" isn't in
    principle impossible for expert systems. (For compilers it's probably
    not yet feasible, though.)


    I think I just had a more mechanistic approach in mind when you said "formally refactored".

    The transformation steps in Bauer/Wössner are IMO very "mechanistic",
    formulated like a process. And the involved steps are quite standard.

    The "mechanistic approach" that you (probably) have in mind is what I
    think is more commonly to expect in optimizers; from simple peephole optimizing, loop unrolling, translation of loop invariants, or other syntax-tree reorganizations, and whatnot.
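
    For example, loop-invariant hoisting in its simplest form (a generic
    sketch, not any particular compiler's output):

        void fill_before(int *a, int n, int x, int y) {
            for (int i = 0; i < n; i++)
                a[i] = x * y + i;     /* x*y recomputed each iteration */
        }

        void fill_after(int *a, int n, int x, int y) {
            int t = x * y;            /* invariant hoisted out of the loop */
            for (int i = 0; i < n; i++)
                a[i] = t + i;
        }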

    (But as said, I'm not up to date on what's happening in that area,
    which is why I had been asking.)

    Thanks for clearing up what transformations you had in mind.

    Thanks for asking for clarification.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2