Bart <bc@freeuk.com> writes:
On 28/11/2024 17:28, Janis Papanagnou wrote:
But we're speaking about compilation times. [...]
You can make a similar argument about turning on the light switch
when entering a room. Flicking light switches is not something you
need to do every few seconds, but if the light took 5 seconds to
come on (or even one second), it would be incredibly annoying.
This analogy sounds like something a defense attorney would say who
has a client that everyone knows is guilty.
On 30.11.2024 00:29, Tim Rentsch wrote:
Bart <bc@freeuk.com> writes:
On 28/11/2024 17:28, Janis Papanagnou wrote:
But we're speaking about compilation times. [...]
You can make a similar argument about turning on the light switch
when entering a room. Flicking light switches is not something you
need to do every few seconds, but if the light took 5 seconds to
come on (or even one second), it would be incredibly annoying.
This analogy sounds like something a defense attorney would say who
has a client that everyone knows is guilty.
Intentionally or not; it's funny to respond to an analogy with an
analogy. :-}
On 28/11/2024 05:18, Tim Rentsch wrote:
Bart <bc@freeuk.com> writes:
On 26/11/2024 12:29, Tim Rentsch wrote:
Bart <bc@freeuk.com> writes:
On 25/11/2024 18:49, Tim Rentsch wrote:
Bart <bc@freeuk.com> writes:
It's funny how nobody seems to care about the speed of
compilers (which can vary by 100:1), but for the generated
programs, the 2:1 speedup you might get by optimising it is
vital!
I think most people would rather take this path (these times
are actual measured times of a recently written program):
compile time: 1 second
program run time: ~7 hours
than this path (extrapolated using the ratios mentioned above):
compile time: 0.01 second
program run time: ~14 hours
I'm trying to think of some computationally intensive app that
would run non-stop for several hours without interaction.
The conclusion is the same whether the program run time
is 7 hours, 7 minutes, or 7 seconds.
Funny you should mention 7 seconds. If I'm working on a single
source file called sql.c, for example, that's how long it takes for
gcc to create an unoptimised executable:
c:\cx>tm gcc sql.c #250Kloc file
TM: 7.38
Your example illustrates my point. Even 250 thousand lines of
source takes only a few seconds to compile. Only people nutty
enough to have single source files over 25,000 lines or so --
over 400 pages at 60 lines/page! -- are so obsessed about
compilation speed. And of course you picked the farthest-most
outlier as your example, grossly misrepresenting any sort of
average or typical case.
It's not atypical for me! [...]
On Wed, 27 Nov 2024 21:18:09 -0800
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Bart <bc@freeuk.com> writes:
On 26/11/2024 12:29, Tim Rentsch wrote:
Bart <bc@freeuk.com> writes:
On 25/11/2024 18:49, Tim Rentsch wrote:
Bart <bc@freeuk.com> writes:
It's funny how nobody seems to care about the speed of
compilers (which can vary by 100:1), but for the generated
programs, the 2:1 speedup you might get by optimising it is
vital!
I think most people would rather take this path (these times
are actual measured times of a recently written program):
compile time: 1 second
program run time: ~7 hours
than this path (extrapolated using the ratios mentioned above):
compile time: 0.01 second
program run time: ~14 hours
I'm trying to think of some computationally intensive app that
would run non-stop for several hours without interaction.
The conclusion is the same whether the program run time
is 7 hours, 7 minutes, or 7 seconds.
Funny you should mention 7 seconds. If I'm working on single
source file called sql.c for example, that's how long it takes for
gcc to create an unoptimised executable:
c:\cx>tm gcc sql.c #250Kloc file
TM: 7.38
Your example illustrates my point. Even 250 thousand lines of
source takes only a few seconds to compile. Only people nutty
enough to have single source files over 25,000 lines or so --
over 400 pages at 60 lines/page! -- are so obsessed about
compilation speed.
My impression was that Bart is talking about machine-generated code.
For machine-generated code 250Kloc is not too much. I would think
that in the field of compiled-code HDL simulation people are interested
in compiling sources as big as they can afford.
And of course you picked the farthest-most
outlier as your example, grossly misrepresenting any sort of
average or typical case.
I remember having a much shorter file (the core of a 3rd-party TCP
protocol implementation) where compilation with gcc took several seconds.
Looked at it now - only 22 Klocs.
Text size in .o - 34KB.
Compilation time on a much newer computer than the one I remembered, with
a good SATA SSD and a 4 GHz Intel Haswell CPU, is a little over 1 sec.
That is with gcc 4.7.3. I would guess that with gcc 13 it would be 1.5 to
2 times longer.
So, in terms of Kloc/sec, the time reported by Bart does not seem
outrageous to me. Indeed, gcc is very slow when compiling any source
several times above average size.
In this particular case I cannot compare gcc to an alternative, because
for the given target (Altera Nios2) there are no alternatives.
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
On 30.11.2024 00:29, Tim Rentsch wrote:
Bart <bc@freeuk.com> writes:
On 28/11/2024 17:28, Janis Papanagnou wrote:
But we're speaking about compilation times. [...]
You can make a similar argument about turning on the light switch
when entering a room. Flicking light switches is not something you
need to do every few seconds, but if the light took 5 seconds to
come on (or even one second), it would be incredibly annoying.
This analogy sounds like something a defense attorney would say who
has a client that everyone knows is guilty.
Intentionally or not; it's funny to respond to an analogy with an
analogy. :-}
My statement was not an analogy. Similar is not the same as
analogous.
Michael S <already5chosen@yahoo.com> writes:
On Wed, 27 Nov 2024 21:18:09 -0800
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Bart <bc@freeuk.com> writes:
On 26/11/2024 12:29, Tim Rentsch wrote:
Bart <bc@freeuk.com> writes:
On 25/11/2024 18:49, Tim Rentsch wrote:
Bart <bc@freeuk.com> writes:
It's funny how nobody seems to care about the speed of
compilers (which can vary by 100:1), but for the generated
programs, the 2:1 speedup you might get by optimising it is
vital!
I think most people would rather take this path (these times
are actual measured times of a recently written program):
compile time: 1 second
program run time: ~7 hours
than this path (extrapolated using the ratios mentioned above):
compile time: 0.01 second
program run time: ~14 hours
I'm trying to think of some computationally intensive app that
would run non-stop for several hours without interaction.
The conclusion is the same whether the program run time
is 7 hours, 7 minutes, or 7 seconds.
Funny you should mention 7 seconds. If I'm working on single
source file called sql.c for example, that's how long it takes for
gcc to create an unoptimised executable:
c:\cx>tm gcc sql.c #250Kloc file
TM: 7.38
Your example illustrates my point. Even 250 thousand lines of
source takes only a few seconds to compile. Only people nutty
enough to have single source files over 25,000 lines or so --
over 400 pages at 60 lines/page! -- are so obsessed about
compilation speed.
My impression was that Bart is talking about machine-generated code.
For machine-generated code 250Kloc is not too much. I would think
that in the field of compiled-code HDL simulation people are interested
in compiling sources as big as they can afford.
Sure. But Bart is implicitly saying that such cases make up the
bulk of C compilations, whereas in fact the reverse is true. People
don't care about Bart's complaint because the circumstances of his
examples almost never apply to them. And he must know this, even
though he tries to pretend he doesn't.
And of course you picked the farthest-most
outlier as your example, grossly misrepresenting any sort of
average or typical case.
I remember having a much shorter file (the core of a 3rd-party TCP
protocol implementation) where compilation with gcc took several seconds.
Looked at it now - only 22 Klocs.
Text size in .o - 34KB.
Compilation time on a much newer computer than the one I remembered, with
a good SATA SSD and a 4 GHz Intel Haswell CPU, is a little over 1 sec.
That is with gcc 4.7.3. I would guess that with gcc 13 it would be 1.5 to
2 times longer.
So, in terms of Kloc/sec, the time reported by Bart does not seem
outrageous to me. Indeed, gcc is very slow when compiling any source
several times above average size.
In this particular case I cannot compare gcc to an alternative, because
for the given target (Altera Nios2) there are no alternatives.
I'm not disputing his ratios on compilation speeds. I implicitly
agreed to them in my earlier remarks. The point is that the
absolute times are so small that most people don't care. For
some reason I can't fathom Bart does care, and apparently cannot
understand why most other people do not care. My conclusion is
that Bart is either quite immature or a narcissist. I have tried
to explain to him why other people think differently than he does,
but it seems he isn't really interested in having it explained.
Oh well, not my problem.
My conclusion is
that Bart is either quite immature or a narcissist.
On 2024-11-16, Stefan Ram wrote:
Dan Purgert <dan@djph.net> wrote or quoted:
if (n==0) { printf ("n: %u\n",n); n++;}
if (n==1) { printf ("n: %u\n",n); n++;}
if (n==2) { printf ("n: %u\n",n); n++;}
if (n==3) { printf ("n: %u\n",n); n++;}
if (n==4) { printf ("n: %u\n",n); n++;}
printf ("all if completed, n=%u\n",n);
My bad if the following instruction structure's already been hashed
out in this thread, but I haven't been following the whole convo!
I honestly lost the plot ages ago; not sure if it was either!
In my C 101 classes, after we've covered "if" and "else",
I always throw this program up on the screen and hit the newbies
with this curveball: "What's this bad boy going to spit out?".
Segfaults? :D
Well, it's a blue moon when someone nails it. Most of them fall
for my little gotcha hook, line, and sinker.
#include <stdio.h>
const char * english( int const n )
{ const char * result;
if( n == 0 )result = "zero";
if( n == 1 )result = "one";
if( n == 2 )result = "two";
if( n == 3 )result = "three";
else result = "four";
return result; }
void print_english( int const n )
{ printf( "%s\n", english( n )); }
int main( void )
{ print_english( 0 );
print_english( 1 );
print_english( 2 );
print_english( 3 );
print_english( 4 ); }
oooh, that's way better at making a point of the hazard than mine was.
... almost needed to engage my rubber duckie, before I realized I was mentally auto-correcting the 'english()' function while reading it.
On 16.11.2024 16:14, James Kuyper wrote:
On 11/16/24 04:42, Stefan Ram wrote:
...
[...]
#include <stdio.h>
const char * english( int const n )
{ const char * result;
if( n == 0 )result = "zero";
if( n == 1 )result = "one";
if( n == 2 )result = "two";
if( n == 3 )result = "three";
else result = "four";
return result; }
That's indeed a nice example, where you get fooled by the treacherous
"trustiness" of formatting[*]. - In syntax we trust! [**]
On 28/11/2024 12:37, Michael S wrote:
On Wed, 27 Nov 2024 21:18:09 -0800
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
c:\cx>tm gcc sql.c #250Kloc file
TM: 7.38
Your example illustrates my point. Even 250 thousand lines of
source takes only a few seconds to compile. Only people nutty
enough to have single source files over 25,000 lines or so --
over 400 pages at 60 lines/page! -- are so obsessed about
compilation speed.
My impression was that Bart is talking about machine-generated code.
For machine generated code 250Kloc is not too much.
This file mostly comprises sqlite3.c which is a machine-generated amalgamation of some 100 actual C files.
You wouldn't normally do development with that version, but in my
scenario, where I was trying to find out why the version built with my
compiler was buggy, I might try adding debug info to it and then building
with a working compiler (e.g. gcc) to compare with.
Tim isn't asking the right questions (or any questions!). WHY does gcc
take so long to generate indifferent code when the task can clearly be
done at least an order of magnitude faster?
My bad if the following instruction structure's already been hashed
out in this thread, but I haven't been following the whole convo!
In my C 101 classes, after we've covered "if" and "else",
I always throw this program up on the screen and hit the newbies
with this curveball: "What's this bad boy going to spit out?".
Well, it's a blue moon when someone nails it. Most of them fall
for my little gotcha hook, line, and sinker.
#include <stdio.h>
const char * english( int const n )
{ const char * result;
if( n == 0 )result = "zero";
if( n == 1 )result = "one";
if( n == 2 )result = "two";
if( n == 3 )result = "three";
else result = "four";
return result; }
void print_english( int const n )
{ printf( "%s\n", english( n )); }
int main( void )
{ print_english( 0 );
print_english( 1 );
print_english( 2 );
print_english( 3 );
print_english( 4 ); }
On 30/11/2024 05:25, Tim Rentsch wrote:
Michael S <already5chosen@yahoo.com> writes:
On Wed, 27 Nov 2024 21:18:09 -0800
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Bart <bc@freeuk.com> writes:
On 26/11/2024 12:29, Tim Rentsch wrote:
Bart <bc@freeuk.com> writes:
On 25/11/2024 18:49, Tim Rentsch wrote:
Bart <bc@freeuk.com> writes:
It's funny how nobody seems to care about the speed of
compilers (which can vary by 100:1), but for the generated
programs, the 2:1 speedup you might get by optimising it is
vital!
I think most people would rather take this path (these times
are actual measured times of a recently written program):
compile time: 1 second
program run time: ~7 hours
than this path (extrapolated using the ratios mentioned above):
compile time: 0.01 second
program run time: ~14 hours
I'm trying to think of some computationally intensive app that
would run non-stop for several hours without interaction.
The conclusion is the same whether the program run time
is 7 hours, 7 minutes, or 7 seconds.
Funny you should mention 7 seconds. If I'm working on single
source file called sql.c for example, that's how long it takes for
gcc to create an unoptimised executable:
c:\cx>tm gcc sql.c #250Kloc file
TM: 7.38
Your example illustrates my point. Even 250 thousand lines of
source takes only a few seconds to compile. Only people nutty
enough to have single source files over 25,000 lines or so --
over 400 pages at 60 lines/page! -- are so obsessed about
compilation speed.
My impression was that Bart is talking about machine-generated code.
For machine-generated code 250Kloc is not too much. I would think
that in the field of compiled-code HDL simulation people are interested
in compiling sources as big as they can afford.
Sure. But Bart is implicitly saying that such cases make up the
bulk of C compilations, whereas in fact the reverse is true. People
don't care about Bart's complaint because the circumstances of his
examples almost never apply to them. And he must know this, even
though he tries to pretend he doesn't.
And of course you picked the farthest-most
outlier as your example, grossly misrepresenting any sort of
average or typical case.
I remember having a much shorter file (the core of a 3rd-party TCP
protocol implementation) where compilation with gcc took several seconds.
Looked at it now - only 22 Klocs.
Text size in .o - 34KB.
Compilation time on a much newer computer than the one I remembered, with
a good SATA SSD and a 4 GHz Intel Haswell CPU, is a little over 1 sec.
That is with gcc 4.7.3. I would guess that with gcc 13 it would be 1.5 to
2 times longer.
So, in terms of Kloc/sec, the time reported by Bart does not seem
outrageous to me. Indeed, gcc is very slow when compiling any source
several times above average size.
In this particular case I cannot compare gcc to an alternative, because
for the given target (Altera Nios2) there are no alternatives.
I'm not disputing his ratios on compilation speeds. I implicitly
agreed to them in my earlier remarks. The point is that the
absolute times are so small that most people don't care. For
some reason I can't fathom Bart does care, and apparently cannot
understand why most other people do not care. My conclusion is
that Bart is either quite immature or a narcissist. I have tried
to explain to him why other people think differently than he does,
but it seems he isn't really interested in having it explained.
Oh well, not my problem.
EVERYBODY cares about compilation speeds - except in this newsgroup, where people try to pretend that it's irrelevant.
But then at the same time, they strive to keep those compile-times small:
* By using tools that have themselves been optimised to reduce their runtimes, and where considerable resources have been expended to get the
best possible code, which naturally also benefits the tool
* By using the fastest possible hardware
* By trying to do parallel builds across multiple cores
* By organising source code into artificially small modules so that recompilation of just one module is quicker. So, relying on independent compilation.
* By going to considerable trouble to define inter-dependencies between modules, so that a make system can AVOID recompiling modules. (Why on
earth would it need to? Oh, because it would be slower!)
* By using development techniques involving thinking deeply about what
to change, to avoid a costly rebuild.
Etc.
All instead of relying on raw compilation speed, which would make a lot
of those points less relevant.
Bart <bc@freeuk.com> wrote:
On 28/11/2024 12:37, Michael S wrote:
On Wed, 27 Nov 2024 21:18:09 -0800
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
c:\cx>tm gcc sql.c #250Kloc file
TM: 7.38
Your example illustrates my point. Even 250 thousand lines of
source takes only a few seconds to compile. Only people nutty
enough to have single source files over 25,000 lines or so --
over 400 pages at 60 lines/page! -- are so obsessed about
compilation speed.
My impression was that Bart is talking about machine-generated code.
For machine generated code 250Kloc is not too much.
This file mostly comprises sqlite3.c which is a machine-generated
amalgamation of some 100 actual C files.
You wouldn't normally do development with that version, but in my
scenario, where I was trying to find out why the version built with my
compiler was buggy, I might try adding debug info to it then building
with a working compiler (eg. gcc) to compare with.
Even in the context of developing a compiler I would not blindly run
many compilations of a large file.
At the first stage I would debug the compiled program, to find out
what is wrong with it.
After that I would try to minimize the testcase, removing code which
does not contribute to the bug.
That involves several compilations
of files with quickly decreasing sizes.
Tim isn't asking the right questions (or any questions!). WHY does gcc
take so long to generate indifferent code when the task can clearly be
done at least an order of magnitude faster?
The simple answer is: users tolerate long compile times. If users
abandoned 'gcc' for some other compiler due to long compile times,
then the 'gcc' developers would notice.
You need to improve your propaganda for faster C compilers...
Stefan Ram <ram@zedat.fu-berlin.de> wrote:
My bad if the following instruction structure's already been hashed
out in this thread, but I haven't been following the whole convo!
In my C 101 classes, after we've covered "if" and "else",
I always throw this program up on the screen and hit the newbies
with this curveball: "What's this bad boy going to spit out?".
Well, it's a blue moon when someone nails it. Most of them fall
for my little gotcha hook, line, and sinker.
#include <stdio.h>
const char * english( int const n )
{ const char * result;
if( n == 0 )result = "zero";
if( n == 1 )result = "one";
if( n == 2 )result = "two";
if( n == 3 )result = "three";
else result = "four";
return result; }
void print_english( int const n )
{ printf( "%s\n", english( n )); }
int main( void )
{ print_english( 0 );
print_english( 1 );
print_english( 2 );
print_english( 3 );
print_english( 4 ); }
That breaks two rules:
- instructions conditioned by 'if' should have braces,
- when we have the result we should return it immediately.
Once those are fixed, the code works as expected...
On 01.12.2024 13:41, Waldek Hebisch wrote:
Stefan Ram <ram@zedat.fu-berlin.de> wrote:
My bad if the following instruction structure's already been hashed
out in this thread, but I haven't been following the whole convo!
In my C 101 classes, after we've covered "if" and "else",
I always throw this program up on the screen and hit the newbies
with this curveball: "What's this bad boy going to spit out?".
Well, it's a blue moon when someone nails it. Most of them fall
for my little gotcha hook, line, and sinker.
#include <stdio.h>
const char * english( int const n )
{ const char * result;
if( n == 0 )result = "zero";
if( n == 1 )result = "one";
if( n == 2 )result = "two";
if( n == 3 )result = "three";
else result = "four";
return result; }
void print_english( int const n )
{ printf( "%s\n", english( n )); }
int main( void )
{ print_english( 0 );
print_english( 1 );
print_english( 2 );
print_english( 3 );
print_english( 4 ); }
That breaks two rules:
- instructions conditioned by 'if' should have braces,
I suppose you don't mean
if (n == value) { result = string; }
else { result = other; }
which I'd think doesn't change anything. - So what is it?
Actually, you should just add an explicit 'else' to fix the problem.
(Here there's no need to fiddle with spurious braces, I'd say.)
- when we have the result we should return it immediately.
This would suffice to fix it, wouldn't it?
Once those are fixed code works as expected...
I find this answer - not wrong, but - problematic for two reasons.
There are no accepted "general rules" that could get "broken"; it's
just rules that serve in given languages and application contexts.
And they may conflict with other "rules" that have been set up to
streamline code, make it safer, or whatever.
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
On 01.12.2024 13:41, Waldek Hebisch wrote:
Stefan Ram <ram@zedat.fu-berlin.de> wrote:
My bad if the following instruction structure's already been hashed
out in this thread, but I haven't been following the whole convo!
In my C 101 classes, after we've covered "if" and "else",
I always throw this program up on the screen and hit the newbies
with this curveball: "What's this bad boy going to spit out?".
Well, it's a blue moon when someone nails it. Most of them fall
for my little gotcha hook, line, and sinker.
#include <stdio.h>
const char * english( int const n )
{ const char * result;
if( n == 0 )result = "zero";
if( n == 1 )result = "one";
if( n == 2 )result = "two";
if( n == 3 )result = "three";
else result = "four";
return result; }
void print_english( int const n )
{ printf( "%s\n", english( n )); }
int main( void )
{ print_english( 0 );
print_english( 1 );
print_english( 2 );
print_english( 3 );
print_english( 4 ); }
That breaks two rules:
- instructions conditioned by 'if' should have braces,
I suppose you don't mean
if (n == value) { result = string; }
else { result = other; }
which I'd think doesn't change anything. - So what is it?
Actually, you should just add explicit 'else' to fix the problem.
(Here there's no need to fiddle with spurious braces, I'd say.)
Lack of braces is a smokescreen hiding the second problem.
Or to put it differently, due to the lack of braces the code
immediately smells bad.
- when we have the result we should return it immediately.
This would suffice to fix it, wouldn't it?
Yes (but see above).
Once those are fixed code works as expected...
I find this answer - not wrong, but - problematic for two reasons.
There's no accepted "general rules" that could get "broken"; it's
just rules that serve in given languages and application contexts.
And they may conflict with other "rules" that have been set up to
streamline code, make it safer, or whatever.
No general rules, yes. But every sane programmer has _some_ rules.
My point was that if you adopt reasonable rules, then whole classes
of potential problems go away.
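For what it's worth, here is a minimal sketch of the function rewritten along those lines (the name english_fixed is invented just for the sketch). With the original, the dangling else binds to the nearest if - the n == 3 test - so 0, 1, 2 and 4 all come out as "four"; with early returns there is no if/else pairing left to get wrong:
#include <stdio.h>
/* Sketch only: same mapping as english() above, but each case
   returns immediately and the last case is an explicit fallback. */
const char *english_fixed( int const n )
{ if( n == 0 ) return "zero";
  if( n == 1 ) return "one";
  if( n == 2 ) return "two";
  if( n == 3 ) return "three";
  return "four"; }
int main( void )
{ for( int n = 0; n <= 4; n++ )
    printf( "%s\n", english_fixed( n ));   /* zero one two three four */
  return 0; }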
On 30/11/2024 05:25, Tim Rentsch wrote:[...]
Michael S <already5chosen@yahoo.com> writes:
I remember having a much shorter file (the core of a 3rd-party TCP
protocol implementation) where compilation with gcc took several seconds.
Looked at it now - only 22 Klocs.
Text size in .o - 34KB.
Compilation time on a much newer computer than the one I remembered, with
a good SATA SSD and a 4 GHz Intel Haswell CPU, is a little over 1 sec.
That is with gcc 4.7.3. I would guess that with gcc 13 it would be 1.5 to
2 times longer.
So, in terms of Kloc/sec, the time reported by Bart does not seem
outrageous to me. Indeed, gcc is very slow when compiling any source
several times above average size.
In this particular case I cannot compare gcc to an alternative, because
for the given target (Altera Nios2) there are no alternatives.
I'm not disputing his ratios on compilation speeds. I implicitly
agreed to them in my earlier remarks. The point is that the
absolute times are so small that most people don't care. For
some reason I can't fathom Bart does care, and apparently cannot
understand why most other people do not care. My conclusion is
that Bart is either quite immature or a narcissist. I have tried
to explain to him why other people think differently than he does,
but it seems he isn't really interested in having it explained.
Oh well, not my problem.
EVERYBODY cares about compilation speeds. [...]
Bart <bc@freeuk.com> writes:
On 30/11/2024 05:25, Tim Rentsch wrote:
EVERYBODY cares about compilation speeds. [...]
No, they don't. I accept that you care about compiler speed. What
most people care about is not speed but compilation times, and as
long as the times are small enough they don't worry about it.
Another difference may be relevant here. Based on other comments of
yours I have the impression that you frequently invoke compilations interactively. A lot of people never do that (or do it only very
rarely). In a project I am working on now I do builds often,
including full builds where every .c file is recompiled. But all
the compilation times together are only a small fraction of the
total, because doing a build includes lots of other steps, including
running regression tests. Even if the total compilation time were
zero the build process wouldn't be appreciably shorter.
I understand that you care about compiler speed, and that's fine
with me; more power to you. Why do you find it so hard to accept
that lots of other people have different views than you do, and
those people are not all stupid?
Do you really consider yourself
the only smart person in the room?
On 02.12.2024 15:44, Bart wrote:
If all you want is to _sequentially_ process each single error in
a source file you don't need a test; all you need is to get the
error message, to start the editor, edit, and reiterate the compile
(to get the next error message, and so on). - Very time consuming.
But as soon as the errors are [all] fixed in a module... - what
do you do with it? - ...you should test that what you've changed
or implemented has been done correctly.
So edit/compile-iterating a single source is more time-consuming
than fixing it in, let's call it, "batch-mode". And once it's
error-free the compile times are negligible in the whole process.
Or are you suggesting that the scenario portrayed here:
https://xkcd.com/303/
is a complete fantasy?
It is a comic - so, yes, it's fantasy. It's worth a scribble on a WC
wall, but not suited as a sensible basis for discussion.
On 02/12/2024 14:09, Tim Rentsch wrote:
Bart <bc@freeuk.com> writes:
On 30/11/2024 05:25, Tim Rentsch wrote:
EVERYBODY cares about compilation speeds. [...]
No, they don't. I accept that you care about compiler speed. What
most people care about is not speed but compilation times, and as
long as the times are small enough they don't worry about it.
Another difference may be relevant here. Based on other comments of
yours I have the impression that you frequently invoke compilations
interactively. A lot of people never do that (or do it only very
rarely). In a project I am working on now I do builds often,
including full builds where every .c file is recompiled. But all
the compilation times together are only a small fraction of the
total, because doing a build includes lots of other steps, including
running regression tests. Even if the total compilation time were
zero the build process wouldn't be appreciably shorter.
But it might be appreciably longer if the compilers you used were a lot slower! Or needed to be invoked more. Then even you might start to care
about it.
You don't care because in your case it is not the bottleneck, and enough
work has been put into those compilers to ensure they are not even slower.
(I don't know why regression tests need to feature in every single build.)
I understand that you care about compiler speed, and that's fine
with me; more power to you. Why do you find it so hard to accept
that lots of other people have different views than you do, and
those people are not all stupid?
You might also accept that for many, compilation /is/ a bottleneck in
their work, or at least it introduces an annoying delay.
Or are you suggesting that the scenario portrayed here:
https://xkcd.com/303/
is a complete fantasy?
Do you really consider yourself
the only smart person in the room?
Perhaps the most impatient.
On 30.11.2024 05:40, Tim Rentsch wrote:
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
On 30.11.2024 00:29, Tim Rentsch wrote:
Bart <bc@freeuk.com> writes:
On 28/11/2024 17:28, Janis Papanagnou wrote:
But we're speaking about compilation times. [...]
You can make a similar argument about turning on the light switch
when entering a room. Flicking light switches is not something you
need to do every few seconds, but if the light took 5 seconds to
come on (or even one second), it would be incredibly annoying.
This analogy sounds like something a defense attorney would say who
has a client that everyone knows is guilty.
Intentionally or not; it's funny to respond to an analogy with an
analogy. :-}
My statement was not an analogy. Similar is not the same as
analogous.
It's of course (and obviously) not the same; it's just a
similar term where the semantics of both terms have an overlap.
(Not sure why you even bothered to reply and nit-pick here.
But with your habit you seem to have just missed the point:
the comparison of your type of reply with Bart's argumentation.)
On Wed, 20 Nov 2024 12:31:35 -0000 (UTC), Dan Purgert wrote:
On 2024-11-16, Stefan Ram wrote:
Dan Purgert <dan@djph.net> wrote or quoted:
if (n==0) { printf ("n: %u\n",n); n++;}
if (n==1) { printf ("n: %u\n",n); n++;}
if (n==2) { printf ("n: %u\n",n); n++;}
if (n==3) { printf ("n: %u\n",n); n++;}
if (n==4) { printf ("n: %u\n",n); n++;}
printf ("all if completed, n=%u\n",n);
The above should be equivalent to this:
for(;n>=0&&n<5;++n) printf ("n: %u\n",n);
printf ("all if completed, n=%u\n",n);
Well, it's a blue moon when someone nails it. Most of them fall
for my little gotcha hook, line, and sinker.
#include <stdio.h>
const char * english( int const n )
{ const char * result;
if( n == 0 )result = "zero";
if( n == 1 )result = "one";
if( n == 2 )result = "two";
if( n == 3 )result = "three";
else result = "four";
return result; }
void print_english( int const n )
{ printf( "%s\n", english( n )); }
int main( void )
{ print_english( 0 );
print_english( 1 );
print_english( 2 );
print_english( 3 );
print_english( 4 ); }
On 02/12/2024 18:19, Janis Papanagnou wrote:
On 02.12.2024 15:44, Bart wrote:
If all you want is to _sequentially_ process each single error in
a source file you don't need a test; all you need is to get the
error message, to start the editor, edit, and reiterate the compile
(to get the next error message, and so on). - Very time consuming.
But as soon as the errors are [all] fixed in a module... - what
do you do with it? - ...you should test that what you've changed
or implemented has been done correctly.
So edit/compile-iterating a single source is more time-consuming
than fixing it in, let's call it, "batch-mode". And once it's
error-free the compile times are negligible in the whole process.
I've struggled to find a suitable real-life analogy.
All I can suggest is that people have gone to some lengths to justify
having a car that can only travel at 3 mph around town, rather than 30
mph (i.e. 5 vs 50 kph).
Maybe their town is only a village, so the net difference is negligible.
Or they rarely drive, or avoid doing so - another way to downplay the
inconvenience of such slow wheels.
The fact is that driving at 3 mph on a clear road is incredibly
frustrating even when you're not in a hurry to get anywhere!
[...]
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
On 30.11.2024 05:40, Tim Rentsch wrote:
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
On 30.11.2024 00:29, Tim Rentsch wrote:
Bart <bc@freeuk.com> writes:
On 28/11/2024 17:28, Janis Papanagnou wrote:
But we're speaking about compilation times. [...]
You can make a similar argument about turning on the light switch
when entering a room. Flicking light switches is not something you need to do every few seconds, but if the light took 5 seconds to
come on (or even one second), it would be incredibly annoying.
This analogy sounds like something a defense attorney would say who
has a client that everyone knows is guilty.
Intentionally or not; it's funny to respond to an analogy with an
analogy. :-}
My statement was not an analogy. Similar is not the same as
analogous.
It's of course (and obviously) not the same; it's just a
similar term where the semantics of both terms have an overlap.
(Not sure why you even bothered to reply and nit-pick here.
It's because you thought it was just a nit-pick that I bothered
to reply.
But with your habit you seem to have just missed the point;
the comparison of your reply-type with Bart's argumentation.)
If you think they are the same then it is you who has missed the
point.
On 02/12/2024 14:09, Tim Rentsch wrote:
Bart <bc@freeuk.com> writes:
On 30/11/2024 05:25, Tim Rentsch wrote:
EVERYBODY cares about compilation speeds. [...]
No, they don't. I accept that you care about compiler speed.
What most people care about is not speed but compilation times,
and as long as the times are small enough they don't worry about
it.
Another difference may be relevant here. Based on other comments
of yours I have the impression that you frequently invoke
compilations interactively. A lot of people never do that (or do
it only very rarely). In a project I am working on now I do
builds often, including full builds where every .c file is
recompiled. But all the compilation times together are only a
small fraction of the total, because doing a build includes lots
of other steps, including running regression tests. Even if the
total compilation time were zero the build process wouldn't be
appreciably shorter.
But it might be appreciably longer if the compilers you used were
a lot slower! Or needed to be invoked more. [...]
On 01/12/2024 13:04, Waldek Hebisch wrote:
Bart <bc@freeuk.com> wrote:
Difficult bugs always occur in larger codebases, but with C, these are in a
language that I can't navigate, and for programs which are not mine, and
which tend to be badly written, bristling with typedefs and macros.
On 28/11/2024 12:37, Michael S wrote:
On Wed, 27 Nov 2024 21:18:09 -0800
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
c:\cx>tm gcc sql.c #250Kloc file
TM: 7.38
Your example illustrates my point. Even 250 thousand lines of
source takes only a few seconds to compile. Only people nutty
enough to have single source files over 25,000 lines or so --
over 400 pages at 60 lines/page! -- are so obsessed about
compilation speed.
My impression was that Bart is talking about machine-generated code.
For machine generated code 250Kloc is not too much.
This file mostly comprises sqlite3.c which is a machine-generated
amalgamation of some 100 actual C files.
You wouldn't normally do development with that version, but in my
scenario, where I was trying to find out why the version built with my
compiler was buggy, I might try adding debug info to it then building
with a working compiler (eg. gcc) to compare with.
Even in the context of developing a compiler I would not blindly run
many compilations of a large file.
It could take a week to track down where the error might be ...
At the first stage I would debug the
compiled program, to find out what is wrong with it.
... within the C program. Except there's nothing wrong with the C
program! It works fine with a working compiler.
The problem will be in the generated code, so in an entirely different program.
So normal debugging tools are of limited use when several sets of
source code are involved, in different languages, or when the error occurs
in the second-generation version of either the self-hosted tool, or of the
program under test if it is to do with languages.
(For example, I got tcc.c working at one point. My generated tcc.exe
could compile tcc.c, but that second-generation tcc.exe didn't work.)
After that I would try to minimize the testcase, removing code which
does not contribute to the bug.
Again, there is nothing wrong with the C program; the problem is in the
code generated for it. The bug can be very subtle, but it usually turns
out to be something silly.
Removing code from tens of thousands of lines (or 250Kloc for sql.c) is
not practical. And yet the aim is to isolate some code which can be used
to recreate the issue in a smaller program.
Debugging can involve comparing two versions, one working, the other
not, looking for differences. And here there may be tracking statements added.
If the only working version is via gcc, then that's bad news because it
makes the process even more of a PITA.
I added an interpreter mode to my IL, because I assumed that would give a
solid, reliable reference implementation to compare against.
It turned out to be even more buggy than the generated native code!
(One problem was to do with my stdarg.h header which implements VARARGS
used in function definitions. It assumes the stack grows downwards. In
my interpreter, it grows downwards!)
That involves several compilations
of files with quickly decreasing sizes.
Tim isn't asking the right questions (or any questions!). WHY does gcc
take so long to generate indifferent code when the task can clearly be
done at least an order of magnitude faster?
The simple answer is: users tolerate long compile times. If users
abandoned 'gcc' for some other compiler due to long compile times,
then the 'gcc' developers would notice.
People use gcc. They come to depend on its features, or they might use (perhaps unknowingly) some extensions. On Windows, gcc includes some
headers and libraries that belong to Linux, but other compilers don't
provide them.
The result is that if they were to switch to a smaller, faster compiler, their program may not work.
They'd have to use it from the start. But then they may want to use
libraries which only work with gcc ...
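To give one concrete illustration of that kind of lock-in (a hypothetical snippet, not from any particular library): GNU C statement expressions are a gcc extension (clang also accepts them), so something like this builds with gcc but may be rejected by a compiler that implements only standard C:
#include <stdio.h>
/* SQUARE uses a GNU C "statement expression" - not standard C - so only
   compilers implementing that extension will accept it. */
#define SQUARE(x) ({ int t_ = (x); t_ * t_; })
int main( void )
{ printf( "%d\n", SQUARE(7) );   /* 49 with gcc or clang */
  return 0; }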
You need to improve your propaganda for faster C compilers...
I actually don't know why I care. I get the benefit of my fast tools
every day; they're a joy to use. So I'm not bothered that other people
are that tolerant of slow, cumbersome build systems.
But then, people in this group do like to belittle small, fast products
(tcc for example as well as my stuff), and that's where it gets annoying.
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
On 02.12.2024 19:48, Bart wrote:[...]
All I can suggest is that people have gone to some lengths to justify
having a car that can only travel at 3 mph around town, rather then 30
mph (ie 5 vs 50 kph).
(You certainly meant km/h.)
Both "kph" and "km/h" are common abbreviations for "kilometers per
hour". Were you not familiar with "kph"?
Bart <bc@freeuk.com> wrote:
(For example, I got tcc.c working at one point. My generated tcc.exe
could compile tcc.c, but that second-generation tcc.c didn't work.)
Clearly, you work in stages: first you find out what is wrong with the
second-generation tcc.exe.
In my interpreter, it grows downwards!)
You probably meant upwards?
And handling such things is natural when you have portability in mind:
either you parametrise stdarg.h so that it works for both stack
directions, or you make sure that the interpreter and compiler use the
same direction (the latter seems to be much easier).
Actually, I think that the most natural way is to have the data
structure layout in the interpreter as close as possible to the
compiler's data layout.
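To illustrate the idea (a toy sketch with invented names, not any actual implementation): an interpreter operand stack whose growth direction is a compile-time parameter, so the same push/pop interface works whichever direction is chosen to match the compiler:
#include <stdio.h>
#define STACK_SIZE 64
/* define STACK_GROWS_UP to match the direction the compiler assumes */
static long stk[STACK_SIZE];
#ifdef STACK_GROWS_UP
static long *sp = stk;                 /* push moves to higher addresses */
static void push( long v ) { *sp++ = v; }
static long pop( void )    { return *--sp; }
#else
static long *sp = stk + STACK_SIZE;    /* push moves to lower addresses */
static void push( long v ) { *--sp = v; }
static long pop( void )    { return *sp++; }
#endif
int main( void )
{ push(1); push(2); push(3);
  printf( "%ld %ld %ld\n", pop(), pop(), pop() );   /* 3 2 1 either way */
  return 0; }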
They'd have to use it from the start. But then they may want to use
libraries which only work with gcc ...
Well, you see that there are reasons to use 'gcc'.
The next version was cross-compiled on Linux using gcc. This version
used inline assembly for rounding and was significantly faster
than what Borland C produced. Note: the images to process were
largish (think of, say, 12000 by 20000 pixels) and speed was an
important factor. So using gcc-specific code was IMO justified
(this code was used conditionally; other compilers would get a
slow portable version using 'floor').
You need to improve your propaganda for faster C compilers...
I actually don't know why I care. I get the benefit of my fast tools
every day; they're a joy to use. So I'm not bothered that other people
are that tolerant of slow, cumbersome build systems.
But then, people in this group do like to belittle small, fast products
(tcc for example as well as my stuff), and that's where it gets annoying.
I tried compiling TeX with tcc. Long ago it did not work due to
limitations of tcc. This time it worked. A small comparison on the main
file (19062 lines):
Command       time (s)   code size   data size
tcc -g        0.017      290521      1188
tcc           0.015      290521      1188
gcc -O0 -g    0.440      248467      14
gcc -O0       0.413      248467      14