In another newsgroup the question came up to what degree the
GNU "C" compilers can handle optimization of recursions like
fib() or ack(), i.e. cascading ones. Somebody here may know?
Can we expect that (with the higher optimization levels) the
exponential complexity from cascaded recursion gets linearized?
Janis
On 2025-09-30, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
> In another newsgroup the question came up to what degree the
> GNU "C" compilers can handle optimization of recursions like
> fib() or ack(), i.e. cascading ones. Somebody here may know?
What exactly are you talking about? The ability to unroll one or more
levels of recursion by inlining cases of a function into itself
to create a larger function that makes fewer calls?
The common way fib is written recursively doesn't lend itself to
tail call optimization; it has to be refactored for that.
The usual way Ackermann is written, it has some tail calls
and a non-tail call, since one of its cases returns
a call to ack, which has an argument computed by calling ack.
> Can we expect that (with the higher optimization levels) the
> exponential complexity from cascaded recursion gets linearized?
I'm reasonably sure that GCC is not going to recognize and linearize
"return fib(n - 1) + fib(n - 2)" recursion. You have to do it yourself
with explicit memoization or iterative rewrite.
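A minimal sketch in C of the explicit memoization Kaz mentions here
(illustrative code, not from the thread; the table size and names are
mine). A small table turns the exponential cascade into a linear number
of distinct evaluations:

#include <stdio.h>

#define MAXN 94                 /* fib(93) is the last to fit in 64 bits */

static unsigned long long memo[MAXN];   /* 0 means "not yet computed" */

static unsigned long long fib(unsigned n)   /* assumes n < MAXN */
{
    if (n < 3)
        return 1;                           /* fib(1) = fib(2) = 1 */
    if (memo[n] == 0)
        memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

int main(void)
{
    printf("%llu\n", fib(90));  /* instant; the naive version is not */
    return 0;
}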
On 30/09/2025 17:54, Janis Papanagnou wrote:
> In another newsgroup the question came up to what degree the
> GNU "C" compilers can handle optimization of recursions like
> fib() or ack(), i.e. cascading ones. Somebody here may know?
> Can we expect that (with the higher optimization levels) the
> exponential complexity from cascaded recursion gets linearized?
> Janis
Optimising compilers can generally handle straight tail recursion well
enough. So if you write a simple "fib" function with two recursions,
you'll get one turned into a loop and the other kept as recursion. I
don't think there is a general way to turn multiple recursions into
loops - you have to make a stack-like data structure somewhere along
the line, and no C compiler is going to do that automatically.
You might find that compilers like the GHC for Haskell can do better,
since recursion is the bread and butter of functional programming
languages.
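Roughly the shape David describes, written out by hand rather than taken
from compiler output (a sketch; gcc's actual transformation differs in
detail): the fib(n-2) call becomes a loop via an accumulator, while
fib(n-1) stays a genuine recursive call.

#include <stdio.h>

/* still exponential overall, but with fewer and shallower calls */
static unsigned fib(unsigned n)
{
    unsigned acc = 0;
    while (n > 2) {           /* was: return fib(n - 1) + fib(n - 2); */
        acc += fib(n - 1);    /* this call stays a real recursion */
        n -= 2;               /* this one became the loop */
    }
    return acc + 1;           /* fib(1) = fib(2) = 1 */
}

int main(void)
{
    printf("%u\n", fib(30));  /* 832040 */
    return 0;
}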
On 30/09/2025 17:12, Kaz Kylheku wrote:
> On 2025-09-30, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
>> In another newsgroup the question came up to what degree the
>> GNU "C" compilers can handle optimization of recursions like
>> fib() or ack(), i.e. cascading ones. Somebody here may know?
> What exactly are you talking about? The ability to unroll one or more
> levels of recursion by inlining cases of a function into itself
> to create a larger function that makes fewer calls?
> The common way fib is written recursively doesn't lend itself to
> tail call optimization; it has to be refactored for that.
> The usual way Ackermann is written, it has some tail calls
> and a non-tail call, since one of its cases returns
> a call to ack, which has an argument computed by calling ack.
>> Can we expect that (with the higher optimization levels) the
>> exponential complexity from cascaded recursion gets linearized?
> I'm reasonably sure that GCC is not going to recognize and linearize
> "return fib(n - 1) + fib(n - 2)" recursion. You have to do it yourself
> with explicit memoization or iterative rewrite.
Take a Fib() function using 'if (n<3) return 1; else...' (since there
are a few different ways of doing it).
In that form, evaluating Fib(N) requires 2*Fib(N)-1 nested calls, which
is what makes it a good benchmark. Or it would be, if gcc didn't try to
cheat.
Compiled with -O3, gcc implements Fib(N) with only 5% of the necessary
number of calls. (This doesn't make it 20x faster, more like 3x as
fast, as there's still a lot going on.)
You can only find out that it's 5% by inserting counting code, which can
only be done in the generated assembly; otherwise gcc will ensure the
count has the expected result.
What is normally 25 lines of assembly ends up at over 250 lines. It's a
lot of effort to expend on a 6-line function, which makes you wonder if
it recognises this benchmark and gives it special treatment.
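For comparison, a source-level counter (a sketch, not bart's code)
counts the *logical* calls, which for this definition is 2*Fib(N)-1 at
any optimization level; as bart notes, after -O3 inlining that is no
longer the number of machine-level calls, which is why he counted in
the generated assembly.

#include <stdio.h>

static unsigned long long calls;    /* logical calls, not machine calls */

static unsigned fib(unsigned n)
{
    calls++;
    return n < 3 ? 1 : fib(n - 1) + fib(n - 2);
}

int main(void)
{
    unsigned n = 30;
    printf("fib(%u) = %u, %llu logical calls\n", n, fib(n), calls);
    /* prints 2*fib(n) - 1 calls regardless of optimization level */
    return 0;
}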
On 30.09.2025 18:12, Kaz Kylheku wrote:
> On 2025-09-30, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
>> In another newsgroup the question came up to what degree the
>> GNU "C" compilers can handle optimization of recursions like
>> fib() or ack(), i.e. cascading ones. Somebody here may know?
> What exactly are you talking about? The ability to unroll one or more
> levels of recursion by inlining cases of a function into itself
> to create a larger function that makes fewer calls?
No. (see below)
> The common way fib is written recursively doesn't lend itself to
> tail call optimization; it has to be refactored for that.
> The usual way Ackermann is written, it has some tail calls
> and a non-tail call, since one of its cases returns
> a call to ack, which has an argument computed by calling ack.
>> Can we expect that (with the higher optimization levels) the
>> exponential complexity from cascaded recursion gets linearized?
> I'm reasonably sure that GCC is not going to recognize and linearize
> "return fib(n - 1) + fib(n - 2)" recursion. You have to do it yourself
> with explicit memoization or iterative rewrite.
To (hopefully) answer your question above and clear up what I
meant (and what I didn't mean)...
I'm not speaking about transformations of recursive calls to
iterative loops (which doesn't reduce algorithmic complexity).
I was rather thinking about transformations like the following,
where fib-1 is the standard form, the one you show above as fib:
fib-1:
func __(_){return(_>2?__(--_)+__(--_):1)}
fib-2:
func ___(_) { return __(_,x^x,x^x^x) }
func __(_,_x,x_) { return --_?__(_,x_,_x+x_):_x+x_ }
I apologize for the obfuscated format (it's code I've written
for a fun post in comp.lang.awk), but we can still see the
call structure (I hope). Both forms are purely functional.
The second, a formally refactored form of the first, doesn't
have those (time-consuming) *cascaded* calls any more.
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
> [...][...]
typedef unsigned int N;
N fib1(N n) { return n > 2 ? fib1(n-1) + fib1(n-2) : 1; }
N fib_aux(N n, N a, N b) { return n > 1 ? fib_aux(n-1, b, a+b) : a+b; }
N fib2(N n) { return fib_aux(n, 1, 0); }
#include <stdlib.h>
#include <stdio.h>
int main(int argc, char **argv)
{
printf("%u\n", fib2(argc > 1 ? atoi(argv[1]) : 40));
}
> The second, a formally refactored form of the first, doesn't
> have those (time-consuming) *cascaded* calls any more.
To me, "formally refactored" means there would be a set of formal rules
that can be applied. What rules did you have in mind?
As for your general point, it can't be done in general or we'd most
likely know that P = NP. On the specific point about ack(), Ackermann
devised the function to show that not all total recursive functions
are primitive recursive.
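For reference, the usual two-argument Ackermann(-Péter) function in C
(a transcription, not code from the thread), showing the call structure
Kaz described earlier: two cases are tail calls, but the inner call of
the third case cannot be one.

#include <stdio.h>

static unsigned long ack(unsigned long m, unsigned long n)
{
    if (m == 0)
        return n + 1;                     /* base case */
    if (n == 0)
        return ack(m - 1, 1);             /* a tail call */
    return ack(m - 1, ack(m, n - 1));     /* outer call: tail; inner: not */
}

int main(void)
{
    printf("%lu\n", ack(3, 3));           /* 61 */
    return 0;
}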
On 01.10.2025 11:05, Ben Bacarisse wrote:
> Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
>> [...][...]
> typedef unsigned int N;
> N fib1(N n) { return n > 2 ? fib1(n-1) + fib1(n-2) : 1; }
> N fib_aux(N n, N a, N b) { return n > 1 ? fib_aux(n-1, b, a+b) : a+b; }
> N fib2(N n) { return fib_aux(n, 1, 0); }
> #include <stdlib.h>
> #include <stdio.h>
> int main(int argc, char **argv)
> {
>     printf("%u\n", fib2(argc > 1 ? atoi(argv[1]) : 40));
> }
>> The second, a formally refactored form of the first, doesn't
>> have those (time-consuming) *cascaded* calls any more.
> To me, "formally refactored" means there would be a set of formal rules
> that can be applied. What rules did you have in mind?
It's expansion, redundancy removal, compression, sub-function matching;
actually this is described (e.g.) in: Bauer, Wössner (Springer, 1981),
in ed. 1984 on page 301. (I won't type it all in here for the post, but
if you're interested I can scan the page and provide a link.)
My doubt was whether these formal rules could be applied in some formal
process within a practically sensible time scale. (But science advances,
and after these decades some progress could have been made in such
formal transformations.)
> As for your general point, it can't be done in general
Yes, I suspected that. Thanks for the confirmation.
> or we'd most likely know that P = NP. On the specific point about
> ack(), Ackermann devised the function to show that not all total
> recursive functions are primitive recursive.
Janis
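A sketch of how rules of this kind can take fib1 to fib2: this is a
reconstruction of the standard tupling/accumulator route, loosely
matching the rule names above, not the literal Bauer/Wössner
derivation; fibpair and fib_it are names invented here.

#include <stdio.h>

typedef unsigned int N;

/* Stage 0: the cascading original (Ben's fib1). */
static N fib1(N n) { return n > 2 ? fib1(n-1) + fib1(n-2) : 1; }

/* Stage 1, expansion/embedding: compute the pair (fib(n), fib(n-1))
   in one function.  Unfolding the recurrence leaves a single
   recursive call, so the cascade is gone. */
static void fibpair(N n, N *f, N *f1)
{
    if (n <= 2) {
        *f = 1; *f1 = 1;              /* fib(2) = fib(1) = 1 */
    } else {
        N g, g1;
        fibpair(n - 1, &g, &g1);      /* one call instead of two */
        *f  = g + g1;
        *f1 = g;
    }
}

/* Stage 2, compression: carry the pair as accumulator arguments;
   the single call is now a tail call, i.e. a loop in disguise. */
static N fib_it(N n, N a, N b)        /* invariant: a = fib(k), b = fib(k-1) */
{
    return n > 2 ? fib_it(n - 1, a + b, a) : a;
}
static N fib2(N n) { return fib_it(n, 1, 1); }   /* start at k = 2 */

int main(void)
{
    N f, f1;
    fibpair(10, &f, &f1);
    printf("%u %u %u\n", fib1(10), f, fib2(10));  /* 55 55 55 */
    return 0;
}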
On 01.10.2025 19:01, Janis Papanagnou wrote:
> On 01.10.2025 11:05, Ben Bacarisse wrote:
>> Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
>>> [...][...]
>> typedef unsigned int N;
>> N fib1(N n) { return n > 2 ? fib1(n-1) + fib1(n-2) : 1; }
>> N fib_aux(N n, N a, N b) { return n > 1 ? fib_aux(n-1, b, a+b) : a+b; }
>> N fib2(N n) { return fib_aux(n, 1, 0); }
>> #include <stdlib.h>
>> #include <stdio.h>
>> int main(int argc, char **argv)
>> {
>>     printf("%u\n", fib2(argc > 1 ? atoi(argv[1]) : 40));
>> }
>>> The second, a formally refactored form of the first, doesn't
>>> have those (time-consuming) *cascaded* calls any more.
>> To me, "formally refactored" means there would be a set of formal rules
>> that can be applied. What rules did you have in mind?
> It's expansion, redundancy removal, compression, sub-function matching;
> actually this is described (e.g.) in: Bauer, Wössner (Springer, 1981),
> in ed. 1984 on page 301. (I won't type it all in here for the post, but
> if you're interested I can scan the page and provide a link.)
See also: Partsch, "Specification and Transformation of Programs"
(Springer, 1990), pages 280ff and 304, with even more formalized
rules (here as well shown based on 'fib' as example).
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
> On 01.10.2025 19:01, Janis Papanagnou wrote:
>> It's expansion, redundancy removal, compression, sub-function matching;
>> actually this is described (e.g.) in: Bauer, Wössner (Springer, 1981),
>> in ed. 1984 on page 301. (I won't type it all in here for the post, but
>> if you're interested I can scan the page and provide a link.)
> See also: Partsch, "Specification and Transformation of Programs"
> (Springer, 1990), pages 280ff and 304, with even more formalized
> rules (here as well shown based on 'fib' as example).
I no longer have access to an academic library :-( Could you write out
the code in the stages that took you from fib1 to fib2?
On 02.10.2025 01:48, Ben Bacarisse wrote:
> Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
>> On 01.10.2025 19:01, Janis Papanagnou wrote:
>>> It's expansion, redundancy removal, compression, sub-function matching;
>>> actually this is described (e.g.) in: Bauer, Wössner (Springer, 1981),
>>> in ed. 1984 on page 301. (I won't type it all in here for the post, but
>>> if you're interested I can scan the page and provide a link.)
http://volatile.gridbug.de/fib_bw.pdf
>> See also: Partsch, "Specification and Transformation of Programs"
>> (Springer, 1990), pages 280ff and 304, with even more formalized
>> rules (here as well shown based on 'fib' as example).
http://volatile.gridbug.de/fib_p.pdf
> I no longer have access to an academic library :-( Could you write out
> the code in the stages that took you from fib1 to fib2?
It's in document fib_bw.pdf (the first link; in German).
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
> On 02.10.2025 01:48, Ben Bacarisse wrote:
>> Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
>>> On 01.10.2025 19:01, Janis Papanagnou wrote:
>>>> It's expansion, redundancy removal, compression, sub-function matching;
>>>> actually this is described (e.g.) in: Bauer, Wössner (Springer, 1981),
>>>> in ed. 1984 on page 301. (I won't type it all in here for the post, but
>>>> if you're interested I can scan the page and provide a link.)
> http://volatile.gridbug.de/fib_bw.pdf
>>> See also: Partsch, "Specification and Transformation of Programs"
>>> (Springer, 1990), pages 280ff and 304, with even more formalized
>>> rules (here as well shown based on 'fib' as example).
> http://volatile.gridbug.de/fib_p.pdf
That's about memoization, which is not the transformation I was asking
about.
>> I no longer have access to an academic library :-( Could you write out
>> the code in the stages that took you from fib1 to fib2?
> It's in document fib_bw.pdf (the first link; in German).
My German is bad, but the big step seems to be annotated "it is obvious
that ...". It may be obvious in this case, but there is no hint at what
formal rule is being applied, or why one would apply it.
I think I just had a more mechanistic approach in mind when you said "formally refactored".
Thanks for clearing up what transformations you had in mind.