Sooner or later, the pipeline designer needs to recognize the
oft-occurring code sequence pictured as:
     INST
     INST
     BC-------\
     INST     |
     INST     |
     INST     |
/----BR       |
|    INST<----/
|    INST
|    INST
\--->INST
     INST
So that the branch predictor predicts as usual, but the DECODER
recognizes the join point of this prediction; if the prediction is
wrong, one only nullifies the mispredicted instructions and then
inserts the alternate instructions, holding the join-point
instructions until the alternate instructions complete.
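The diagram is the classic if/else "hammock": a conditional branch (BC) over the then-arm, an unconditional branch (BR) past the else-arm, and a common join point. A minimal C sketch of source that typically compiles to that shape (the function name is illustrative, not from the thread):

```c
#include <assert.h>

/* A plain if/else produces the pictured pattern: BC skips the
 * then-arm when the condition fails, and BR at the end of the
 * then-arm jumps over the else-arm to the join point. */
int abs_diff(int a, int b)
{
    int r;
    if (a > b) {       /* BC: branch to else-arm if a <= b       */
        r = a - b;     /* then-arm INSTs                         */
    } else {           /* BR: end of then-arm jumps to the join  */
        r = b - a;     /* else-arm INSTs (the BC target)         */
    }
    return r;          /* join-point INST, same on either path   */
}
```

Whichever arm the predictor guesses, the instructions at and after the join are identical, which is what makes holding them across a repair attractive.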
Michael S <already5chosen@yahoo.com> writes:
Yes, compilers often generate such code.
When coding in asm, I typically know at least something about
probability of branches, so I tend to code it differently:
According to Michael S <already5chosen@yahoo.com>:
> Yes, compilers often generate such code.
> When coding in asm, I typically know at least something about
> probability of branches, so I tend to code it differently:
The first version of FORTRAN had a FREQUENCY statement which let you
tell it the relative likelihood of each of the results of a three-way
IF, and the expected number of iterations of a DO loop. It turned
out to be useless, because programmers usually guessed wrong.
The final straw was a compiler where they realized FREQUENCY was
implemented backward and nobody noticed.
Unless you've profiled the code and you have data to support your
branch guesses, just write it in the clearest way you can.
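The modern descendant of FREQUENCY is the compiler branch hint, e.g. GCC/Clang's `__builtin_expect`, which carries exactly the same risk when the programmer's guess is wrong. A small sketch (the function is a made-up example, not from the thread):

```c
#include <assert.h>

/* GCC/Clang hint: tell the compiler this condition is usually
 * false, so it lays the error path out of line. Like FREQUENCY,
 * the hint only helps when the guess matches reality. */
#define unlikely(x) __builtin_expect(!!(x), 0)

int checked_div(int num, int den, int *out)
{
    if (unlikely(den == 0))   /* assumed-rare error path */
        return -1;
    *out = num / den;         /* expected hot path       */
    return 0;
}
```

Profile-guided optimization (e.g. GCC's `-fprofile-use`) replaces such guesses with measured frequencies, which is the remedy Levine's advice points toward.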
mitchalsup@aol.com (MitchAlsup1) writes:
> Sooner or later, the pipeline designer needs to recognize the
> oft-occurring code sequence pictured as:
>      INST
>      INST
>      BC-------\
>      INST     |
>      INST     |
>      INST     |
> /----BR       |
> |    INST<----/
> |    INST
> |    INST
> \--->INST
>      INST
> So that the branch predictor predicts as usual, but the DECODER
> recognizes the join point of this prediction; if the prediction is
> wrong, one only nullifies the mispredicted instructions and then
> inserts the alternate instructions, holding the join-point
> instructions until the alternate instructions complete.
Would this really save much? The main penalty here would still be
fetching and decoding the alternate instructions. Sure, the
instructions after the join point would not have to be fetched and
decoded, but they would still have to go through the renamer, which
typically is as narrow or narrower than instruction fetch and decode,
so avoiding fetch and decode only helps for power (ok, that's
something), but probably not performance.
And the kind of insertion you imagine makes things more complicated,
and only helps in the rare case of a misprediction.
What alternatives do we have? There still are some branches that are
hard to predict and for which it would be helpful to optimize them.
Classically the programmer or compiler was supposed to turn
hard-to-predict branches into conditional execution (e.g., someone
(IIRC ARM) has an ITE instruction for that, and My 6600 has something
similar IIRC). These kinds of instructions tend to turn the condition
from a control-flow dependency (free when predicted, costly when mispredicted) into a data-flow dependency (usually some cost, but
usually much lower than a misprediction).
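The control-flow-to-data-flow trade Anton describes is visible at the source level when a selection is written branchlessly. A minimal sketch, assuming 0/1 boolean semantics of C comparisons (the mask form is guaranteed branch-free; whether a compiler lowers the branchy form to a conditional move is its own choice):

```c
#include <assert.h>

/* Branchy form: control-flow dependency on (a < b) --
 * free when predicted correctly, costly when mispredicted. */
int min_branchy(int a, int b)
{
    if (a < b)
        return a;
    return b;
}

/* Branchless form: (a < b) yields 0 or 1, so m is all-ones or
 * all-zeros and the bitmask selects an arm with no branch at all.
 * The condition is now a data-flow dependency: some cost always,
 * but no misprediction penalty. */
int min_branchless(int a, int b)
{
    int m = -(a < b);
    return (a & m) | (b & ~m);
}
```

This is the software analogue of ITE-style predication: both arms' work feeds a select, rather than the condition steering fetch.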
But programmers are not that great at predicting mispredictions (and programming languages usually don't have ways to express them),
compilers are worse (even with feedback-directed optimization as it
exists, i.e., without prediction accuracy feedback), and
predictability might change between phases or callers.
So it seems to me that the hardware might use history data to predict
whether a branch is hard to predict (maybe also taking into account
how the dependencies affect the cost), and switch between a
branch-predicting implementation and a data-flow implementation of the
condition.
I have not followed ISCA and Micro proceedings in recent years, but I
would not be surprised if somebody has already done a paper on such an
idea.
- anton