R.Wieser wrote:
Hello all,
I've been working at something to display a 3D environment using OpenGL.
The default projection - with all rotations being zero - seems to have me
looking along the negative Z-axis.
Which seems to indicate that the horizontal rotation (yaw) starts from the negative Z-axis.
This doesn't match the start-of-rotation graphing paper uses, where it
starts from the positive X-axis.
The same not-knowing problem exists for the X-axis (pitch) and Z-axis (roll) rotations.
I've been looking near and far, high and low, for information on both of the above (starting angles, and matching up OpenGL's projection with standard graphing paper), but have found nothing in either regard.
Question: is there a standard start-of-rotation description available ?
(what I currently have works, but might well not match up with what other programs use)
Secondary question: Should I be trying to match OpenGL's projection with
that of graphing paper (I'm starting to doubt it), and if so how is that
done ?
Remark: pre/post-rotating the camera in such a way that I'm looking along the +X-axis isn't the problem. Matching up OpenGL's rotations with sin() and cos() is.
Regards,
Rudy Wieser
To answer your first question, there isn't one universal "standard" for starting rotation angles.
In the standard "right-handed" math system you're thinking of, with +Z coming
out of the screen, a zero yaw would indeed point along the positive
X-axis.
For your second question, you're right to start doubting whether you
should fight it. My take is don't try to force OpenGL to be the graphing paper system.
It's a battle that creates more complex code and potential for errors.
What most of us do is just embrace OpenGL's native system for all the internal rendering math. It's what the GPU expects and it's optimized for that.
The real trick, and it sounds like you're already on this path, is to create a transformation layer for your own logic.
Then, just before you send everything to OpenGL for drawing, you apply a single, fixed transformation (like a 90-degree rotation) to convert your entire scene from your "math world" into OpenGL's "camera world."
It's a bit of a mental shift, but treating OpenGL's space as a separate destination
for your data rather than the space you work in will save you a ton of
pain.
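To make that concrete: below is a minimal sketch of such a bridge, assuming legacy fixed-function OpenGL and a graphing-paper-style world where zero yaw looks along +X with Y up (the function name is mine, purely illustrative).

#include <GL/gl.h>

/* Fixed change-of-basis, applied once per frame before the camera and
   scene transforms.  Rotating the whole scene by +90 degrees about Y is
   the same as yawing the camera by -90, and maps the world's +X axis
   onto OpenGL's default -Z viewing direction. */
static void apply_world_to_camera_bridge(void)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(90.0f, 0.0f, 1.0f, 0.0f);  /* world +X -> eye -Z */
    /* ... then the camera transform and the scene, all in world terms ... */
}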
Hope that helps clear things up a bit.
R.Wieser wrote:
MummyChunk,
To answer your first question, there isn't one universal "standard" for
starting rotation angles.
That's hard to believe. Somehow all those people all over the world sending each other data regarding positioning in a 3D environment must have come to some "let's use this" agreement. That's the kind of information I'm after.
In the standard "right-handed" math system you're thinking of, with +Z
coming
out of the screen, a zero yaw would indeed point along the positive
X-axis.
That's (part of) the problem : a default camera looks along the -Z axis,
making me expect that a "forward" movement will be in that direction - and not to the right (along the +X axis).
I simply can't wrap my head around that. It looks to me like I should *always* add a -90 degree offset to the camera's up-axis rotation to have it match up with the zero-yaw convention, and that's just silly.
An(y) explanation of how I /should/ be looking at - and dealing with - it would be welcome.
For your second question, you're right to start doubting whether you
should fight it. My take is don't try to force OpenGL to be the graphing
paper system.
Besides the above ? The problem is, I have to. I get 3D data from other sources, and am trying to display it.
It's a battle that creates more complex code and potential for errors.
It does, but barely. Just little adjustments to the code I used for my first version, which used the default OpenGL projection.
What most of us do is just embrace OpenGL's native system for all the
internal rendering math. It's what the GPU expects and it's optimized for
that.
:-) It would be a fool's idea not to do that.
The real trick, and it sounds like you're already on this path, is to
create a transformation layer for your own logic.
....
Then, just before you send everything to OpenGL for drawing, you apply a
single, fixed transformation (like a 90-degree rotation) to convert your
entire scene from your "math world" into OpenGL's "camera world."
Yes, I can do and have done so.
But that raises the question : why did OpenGL choose to have its default camera point in a direction that is 90 degrees off from the convention that a zero yaw is along the +X axis ? It doesn't make any sense. :-(
Which is why I got thoroughly confused and asked if there is a universal standard (with "convention" being another word for it). Having a solid starting point (in both meanings of the word) always helps.
It's a bit of a mental shift, but treating OpenGL's space as a separate
destination
for your data rather than the space you work in will save you a ton of
pain.
In the end I did just that - but I still have no idea why I had to, or even if I should have.
Hope that helps clear things up a bit.
Alas, no.
And I still have no idea what the starting points and directions of the two other rotation angles are. I've arbitrarily chosen them, but have no idea if they align with the standard/convention. :-|
Regards,
Rudy Wieser
Yeah, I hear you on the frustration. It feels like it should be a simple thing that everyone agreed on, and the fact that it isn't is a genuine headache.
When I said there's no one universal standard, I should have been clearer: the big ones in 3D graphics are right-handed vs. left-handed coordinate systems.
For your other two angles, pitch and roll, it's the same story. The "standard" for pitch is that positive rotation raises the nose (like an airplane pulling up),
and positive roll tilts you clockwise (right wing down).
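Since the original question was also about matching rotations up with sin() and cos(): with those conventions (yaw measured from +X about the up axis, positive pitch nose-up), the forward vector works out as below. A sketch in C, assuming a right-handed Y-up world; the names are illustrative, not from any library.

#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Forward direction from yaw (radians, measured from +X about +Y) and
   pitch (radians, positive = nose up), in a right-handed Y-up world. */
Vec3 forward_from_yaw_pitch(float yaw, float pitch)
{
    Vec3 f;
    f.x =  cosf(pitch) * cosf(yaw);
    f.y =  sinf(pitch);               /* positive pitch raises the nose */
    f.z = -cosf(pitch) * sinf(yaw);   /* right-hand rule about +Y       */
    return f;
}

At yaw = 0 this gives (1, 0, 0), looking along +X as on graphing paper; at yaw = 90 degrees it gives (0, 0, -1), OpenGL's default view direction - which is exactly the 90-degree offset under discussion.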
The decision to have that camera point down -Z by default was likely because it pairs a right-handed eye space with depth values that still increase with distance: the projection flips Z, so depth grows as objects get further away even though eye-space Z is negative in front of the viewer.
You hitting the -90 degree offset is the exact moment everyone has when they try to map a "world" coordinate system onto OpenGL's "camera" system.
The "why" is buried in decades-old architectural decisions. It doesn't
make it less confusing, but you're definitely not alone in having to build that 90-degree bridge.
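In code that bridge really is just one constant. A sketch (hypothetical helper, degrees in and out):

/* Convert a graphing-paper yaw (0 = along +X) into a yaw for OpenGL's
   default camera (0 = along -Z). */
float world_yaw_to_gl_yaw(float world_yaw)
{
    return world_yaw - 90.0f;  /* zero yaw on paper = -90 in GL terms */
}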
Hope this sheds a bit more light on the "why" of the mess.
R.Wieser wrote:
MummyChunk,
Yeah, I hear you on the frustration. It feels like it should be a simple
thing that everyone agreed on, and the fact that it isn't is a genuine
headache.
Worse : when I encountered it I started to doubt my sanity. As if I had made a silly mistake but could not figure it out. Especially when my 'googling' didn't return any hits of people dealing with the same.
When I said there's no one universal standard, I should have been clearer: the big ones in 3D graphics are right-handed vs. left-handed coordinate systems.
I forgot all about mentioning that. But as far as I can tell OpenGL uses
the (most used?) right-handed system.
For your other two angles, pitch and roll, it's the same story. The
"standard" for pitch is that positive rotation raises the nose (like an
airplane pulling up),
I initially picked the same, as it feels natural. And it luckily also lines up with using the "right hand rule" on the pitch axis.
Though I had a problem with choosing the starting point. Using straight up or down would be as good as using straight forward. Ultimately I took straight forward (a pitch of zero equals flying horizontally), which also seemed to work well when using OpenGL's rotation matrix.
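For reference, my camera setup currently boils down to something like the following (a sketch only; legacy OpenGL, helper name my own, with the 90-degree yaw offset applied):

#include <GL/gl.h>

/* Camera pose as yaw/pitch/roll in degrees: zero pitch = level flight,
   yaw measured from +X.  Applied inverted (negated angles, reverse
   order) because the modelview matrix moves the world, not the camera. */
void apply_camera(float yaw, float pitch, float roll)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(-roll,  0.0f, 0.0f, 1.0f);
    glRotatef(-pitch, 1.0f, 0.0f, 0.0f);
    glRotatef(-(yaw - 90.0f), 0.0f, 1.0f, 0.0f);  /* the 90-degree bridge */
    /* ... glTranslatef(-eye_x, -eye_y, -eye_z), then draw the world ... */
}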
and positive roll tilts you clockwise (right wing down).
Ah yes, exactly the opposite of what you get in a default OpenGL projection (with a roll about the Z-axis). :-)
The decision to have that camera point down -Z by default was likely
because it creates a nice, normalized view volume where the Z coordinate
increases as objects get further away from the viewer.
It would work the same for the other two axes. But, a choice has been made. With today's 20/20 hindsight, not the most useful one though.
And there is another convention hiding there : whether the default projection gets used as a top-down, a front, or even a side view.
You hitting the -90 degree offset is the exact moment everyone has when they try to map a "world" coordinate system onto OpenGL's "camera" system.
The "why" is buried in decades-old architectural decisions. It doesn't
make it less confusing, but you're definitely not alone in having to build >> that 90-degree bridge.
Phew, thanks. As mentioned, I really started to doubt myself.
Hope this sheds a bit more light on the "why" of the mess.
A bit. Decisions that were made with the knowledge and ideas of their time... I still do not quite understand how it came to be though. Oh well.
Thanks for the explanation.
Regards,
Rudy Wieser
That lack of search hits usually just means everyone else is using a pre-built engine or library that handles this conversion under the hood, so they never even see the guts of it.
It's a consistent system, just anchored differently.
Your choice of zero pitch for horizontal flight is the standard in aviation and in 3D simulation, so you nailed that.
And you spotted the exact thing with the roll being inverted between the two systems; that's the final piece of the puzzle confirming you're now fully translating between the two mindsets.
There's a compelling theory that early computer graphics were heavily influenced by 2D screen coordinates (X right, Y down), with Z simply extruded into the screen.
Flip Y to point up, as the math convention demands, and keeping the system right-handed forces +Z back out of the screen - leaving the camera looking into the screen, down -Z. It was a logical step from 2D, but one that created this exact legacy we're all dealing with.
It's one of those foundational things that just is what it is now. The important thing is you've worked through it and have a solid, working system.
MummyChunk,
And you spotted the exact thing with the roll being inverted between the two systems; that's the final piece of the puzzle confirming you're now fully translating between the two mindsets.
There's a compelling theory that early computer graphics were heavily influenced by 2D screen coordinates (X right, Y down), with Z simply extruded into the screen.
After having sent my previous post I re-realized (my initial post was quite some time ago) that something of the kind was the cause, though I placed my bet on either cartography or graphing paper (with X=right, Y=up).
Combined with Z coming off of the map/paper, it exactly matches OpenGL's initial projection.
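Which also means 2D graphing-paper data could go to OpenGL unchanged. A sketch in (legacy) immediate mode:

#include <GL/gl.h>

/* Graphing-paper coordinates (X right, Y up, Z out of the page) coincide
   with OpenGL's default eye space, so a 2D plot can be drawn as-is on
   the Z = 0 plane facing the viewer. */
void plot_point(float x, float y)
{
    glBegin(GL_POINTS);
    glVertex3f(x, y, 0.0f);
    glEnd();
}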
[dumbass mode on]
Why didn't the world just adopt OpenGL's coordinate and projection system ? Then I would not have had all these troubles !
[dumbass mode off]
When there is opportunity for two (or more!) ways of doing things
to develop in isolation, you get incompatible standards; when they develop in contact with each other, you get a standards war.
The latter brings hope that one standard (not necessarily the best one) eventually dominates, the former that you have to continue with two.
I suspect your example of 3D presentation/perception is mainly the first
case - the two groups both didn't even think there was anyone else nearly
as advanced as they were;
there may also have been (a) commercial fear [of industrial espionage/intellectual property theft], and (b) disdain [I imagine a lot
of the early work was in gaming,