I came across some old NASA Fortran subroutines for inverting small
matrices. Basically no loops, just long explicit equations for each cell
(and a determinant), so they trade memory for speed. Interestingly, for the
5x5 matrix there are extra lines like
A11=A(1,1)
A12=A(1,2)
and the scalar variables are used thereafter. So I presume the 25
copy operations are offset by faster access later, using scalars instead
of indexed array fetches?
Woozy Song <suzyw0ng@outlook.com> wrote:
I came across some old NASA Fortran subroutines for inverting small
matrices. Basically, no loops just long equations for each cell (and a
determinant). So efficient for speed, not memory. Interestingly for 5x5
matrix, there are extra lines like
A11=A(1,1)
A12=A(1,2)
then the scalar variables are used thereafter. So I presume the 25 copy
operations are offset by faster access later using scalar instead of
indexed array fetch?
I very much doubt this would make a difference for modern optimizing compilers.
Do these routines actually invert matrices? This is rarely needed.
What is the dimension of these matrices?
Thomas Koenig wrote:
Woozy Song <suzyw0ng@outlook.com> wrote:
I came across some old NASA Fortran subroutines for inverting small
matrices. Basically, no loops just long equations for each cell (and a
determinant). So efficient for speed, not memory. Interestingly for 5x5
matrix, there are extra lines like
A11=A(1,1)
A12=A(1,2)
then the scalar variables are used thereafter. So I presume the 25 copy
operations are offset by faster access later using scalar instead of
indexed array fetch?
I very much doubt this would make a difference for modern optimizing
compilers.
Do these routines actually invert matrices? This is rarely needed.
What is the dimension of these matrices?
The dimensions are 3x3 to 6x6. I have seen similar snippets elsewhere, but
only for 3x3 and 4x4. I presume there are niche algorithms that do lots of
small matrix inversions.
So if it doesn't gain anything, I guess it was done to make the code text
less unwieldy, replacing 6 characters with 3.
Thomas Koenig wrote:
Woozy Song <suzyw0ng@outlook.com> wrote:
[...]
Do these routines actually invert matrices? This is rarely needed.
What is the dimension of these matrices?
I can see a need for that if matrices were to be used with programs
written in other languages.
Just as a reminder, a (3,3) matrix would be stored in Fortran as
1,1 2,1 3,1 1,2 2,2 3,2 1,3 2,3 3,3
while in other languages it would be
1,1 1,2 1,3 2,1 2,2 2,3 3,1 3,2 3,3
One of a couple of weird decisions made with the original language specs.