Understanding DirectX - c#

// Draw first triangle (uses whatever world transform is currently set)
device.VertexFormat = CustomVertex.PositionColored.Format;
device.DrawUserPrimitives(PrimitiveType.TriangleFan, 4, verts);
device.Transform.World = Matrix.RotationY(angle += 0.05f);
// Draw second triangle (uses the Y rotation set on the previous line)
device.VertexFormat = CustomVertex.PositionColored.Format;
device.DrawUserPrimitives(PrimitiveType.TriangleFan, 4, verts);
device.Transform.World = Matrix.RotationZ(angle += 0.05f);
I can't understand Transform.World.
The way I understand it (which would seem logical): you draw the first triangle and it rotates around Y... then you draw the second triangle, and that should rotate both triangles around Z. But when this code runs, one triangle rotates only around Y and the other only around Z. Why?

The reason that the first triangle is rotated along the Z-axis is that you set that rotation at the end of the previous frame, and it is preserved until you change it. After you draw that triangle, you change the transform to a rotation along the Y-axis and then draw the second triangle. Then you set the matrix to a rotation along the Z-axis again, and the whole process starts over.
You are not adding to the rotation that was previously in device.Transform.World; you are simply replacing it. If you want to combine rotations, you have to multiply the matrices together.

You should perform matrix multiplication if you want to combine multiple transformations. That is, your last statement should be device.Transform.World *= Matrix.RotationZ(angle += 0.05f);. Your code simply replaces the rotation along Y with a rotation along the Z-axis. And, by the way, you should first apply the transformation and only then render the primitives (not the other way around).
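For illustration, a minimal sketch of the corrected loop (assuming the same Managed DirectX device, verts array and angle field as in the question):
// Set the transform first, then draw.
device.VertexFormat = CustomVertex.PositionColored.Format;
// First triangle: rotation around Y only.
device.Transform.World = Matrix.RotationY(angle);
device.DrawUserPrimitives(PrimitiveType.TriangleFan, 4, verts);
// Second triangle: Y and Z rotations combined by multiplication.
device.Transform.World = Matrix.RotationY(angle) * Matrix.RotationZ(angle);
device.DrawUserPrimitives(PrimitiveType.TriangleFan, 4, verts);
angle += 0.05f;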
You should be able to find many detailed transformation tutorials online.

How to remap mouse position to a potentially moving UI element?

I'm working on an RTS game with some pretty extensive UI, so I moved the main camera's output to a quad which only makes up about half the screen, and I'm blitting some UI effects over the rest. My current way of interacting with the game uses Unity's Input.mousePosition. When I moved the camera's feed to the quad, those pixel coordinates were obviously distorted, so I fixed them like this:
mapMousePos = (Input.mousePosition * mscaleCorr - mapCorrection * mscaleCorr);
mapCorrection being the pixel offset of the smaller feed, and mscaleCorr being a magic number I found through trial and error (a temporary fix).
Point is, now I'm realizing that running this game at a different resolution will almost certainly break these magic numbers.
What I want mapMousePos to be is what Input.mousePosition was before I moved the gameplay to the small quad: going from (0,0) at the bottom left of the quad to the screen (width, height) at the top right of the quad. This is just so it works really nicely with ScreenToWorldPoint on my gameplay camera.
I have the camera-feed quad parented to a full-screen quad, and tried using their relative positions to apply the necessary transformations, but it didn't work, I'm guessing because it's a pixel problem.
I've dug around the docs for a solution using Camera's built-in WorldToScreenPoint function, without any luck. I'm sure I'll bump into a fix eventually, but would greatly appreciate any pointers.
Here's what I've come up with; it's stupid, but it works.
I've placed objects at the bottom left and top right of the quad, stored in code as bL and tR.
Then I convert the mousePosition to a world position using ScreenToWorldPoint(), remap it by subtracting the bottom-left position, and turn it into a fraction across the quad by dividing by the delta to the top right. Multiply that fraction by the pixel dimensions of the gameplay camera, and voila.
In code, this:
// Mouse position in world space on the quad's plane.
Vector3 wPos = finalcam.ScreenToWorldPoint(Input.mousePosition);
// Remap relative to the quad's bottom-left corner marker.
wPos -= bL.transform.position;
mapMousePos = new Vector2(Mathf.Abs(wPos.x), Mathf.Abs(wPos.y));
// Normalise by the quad's extents to get a 0..1 fraction across the quad.
mapMousePos = new Vector2(
    mapMousePos.x / (tR.transform.position.x - bL.transform.position.x),
    mapMousePos.y / (tR.transform.position.y - bL.transform.position.y));
// Scale back up to the gameplay camera's pixel dimensions.
mapMousePos = new Vector2(mapMousePos.x * Camera.main.pixelWidth, mapMousePos.y * Camera.main.pixelHeight);
Again, it's dumb, but it seems to work. I'm leaving this up in case anybody knows a cleaner method.
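A possibly cleaner variant (an untested sketch along the same lines, reusing the bL/tR corner objects): project the corners into screen space once and let Rect do the remapping:
// Corners of the quad in finalcam's screen space.
Vector3 blScreen = finalcam.WorldToScreenPoint(bL.transform.position);
Vector3 trScreen = finalcam.WorldToScreenPoint(tR.transform.position);
Rect quadRect = Rect.MinMaxRect(blScreen.x, blScreen.y, trScreen.x, trScreen.y);
// Normalised 0..1 position of the mouse within the quad's rect.
Vector2 norm = Rect.PointToNormalized(quadRect, Input.mousePosition);
mapMousePos = new Vector2(norm.x * Camera.main.pixelWidth, norm.y * Camera.main.pixelHeight);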

OpenGL - Cannot get cumulative rotation around correct axis and origin

I am implementing an Arcball rotation in an existing project I wrote several years ago, using OpenTK & C#, and have got stuck at the final hurdle.
This is 'old-style' non-shader OpenGL. I am confident that the Arcball rotation is working correctly, the problem is just applying the resulting matrix. It should be fairly straightforward but it isn't working out that way.
I get the Arcball rotation as a quaternion (qCurrent) then convert that to a matrix. I have then tried a couple of approaches:
Simply apply that as an additional rotation to the existing scene:
GL.PushMatrix();
Matrix4 arcball_rot = Matrix4.CreateFromQuaternion(qCurrent);
GL.MultMatrix(ref arcball_rot);
// ... render objects in scene
GL.PopMatrix();
This applies the correct rotation around the correct origin (the camera target), but with respect to the world coordinate system, not the viewer. This makes sense because I am effectively applying the rotations in the wrong sequence.
Apply the rotations in sequence, with the arcball rotation first, which means starting from scratch. Here, rtn_complete is a matrix storing the modelview matrix as initialised, before the Arcball rotation:
Matrix4 modelview = Matrix4.LookAt(camera.camerapos, camera.cameratarget, camera.cameraup);
rtn_complete = modelview;
Then...
Matrix4 mm = Matrix4.CreateFromQuaternion(qCurrent) * rtn_complete;
GL.LoadIdentity();
GL.MultMatrix(ref mm);
This applies the correct rotation wrt the viewer, but around the wrong centre of rotation. It is a point a long way away from the camera target.
Obviously this approach needs a forward/backward translation either side of the rotation, but I have tried pretty much every possible combination of these and none of them work.
Camera and target positions are as follows:
camera.camerapos = (-51.3, -67.9, 37.7), and
camera.cameratarget = (0.0, 0.6, 7.3)
When I add translations as below (and countless other permutations) I still get the scene rotating around the wrong origin.
GL.LoadIdentity();
GL.Translate(camera.cameratarget);
GL.MultMatrix(ref arcball_rot);
GL.Translate(-camera.cameratarget);
GL.MultMatrix(ref rtn_complete);
My feeling is that the root of the problem is likely to be the translation applied by using LookAt, which I am not taking into account when doing the above transformations.
However, when I check the rtn_complete matrix (which is the modelview matrix following the LookAt) the fourth column does not contain a translation. The matrix looks like this:
0.80, 0.20, -0.56, 0.00
-0.60, 0.27, -0.75, 0.00
0.00, 0.94, 0.34, 0.00
0.38, -7.02, -92.8, 1.00
I would have expected to see a translation here.
EDIT 1:
Found it eventually. I was on the right track with my suspicion about the translation resulting from using Matrix4.LookAt().
The way I had listed the matrices for debugging resulted in a transpose, so the translation was there for me to see, but I was missing it: it was in the fourth row, not the fourth column. The translation is (0.38, -7.02, -92.8). Applying this translation on either side of the Arcball rotation results in the expected rotation behaviour, initially.
GL.LoadIdentity();
GL.Translate(0.38, -7.02, -92.8);
GL.MultMatrix(ref mm);
GL.Translate(-0.38, 7.02, 92.8);
GL.MultMatrix(ref rtn_complete);
EDIT 2:
Having worked out the above I am very close, but it's still not quite right when I move somewhere else in the scene. Again I have a couple of problems, and can solve one or the other but not both simultaneously.
Because of the Matrix4.LookAt I separated out the translation and the rotation components of the rtn_complete matrix (respectively 'rtn_complete_trans' and 'rtn_complete_rot').
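One way to do that split, as a sketch (assuming an OpenTK version that provides these Matrix4 helpers):
// Separate the LookAt modelview into translation and rotation parts.
Vector3 rtn_complete_trans = rtn_complete.ExtractTranslation();
Matrix4 rtn_complete_rot = rtn_complete.ClearTranslation();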
If I do this, the rotations are spot on:
GL.LoadIdentity();
GL.Translate(rtn_complete_trans); // modelview translation component
GL.Translate(camera.cameratarget); // translate to rotation centre
GL.MultMatrix(ref arcball_rot); // ongoing Arcball rotation
GL.MultMatrix(ref rtn_complete_rot); // modelview rotation component
GL.Translate(-camera.cameratarget); // translate back from rotation centre
But there is an unwanted translation each time the view is initialised. If I rotate, release and repeat a few times the object gradually moves off screen.
If I change the position of the second camera translation this unwanted shift no longer happens, but the centre of rotation is off:
GL.LoadIdentity();
GL.Translate(rtn_complete_trans); // modelview translation component
GL.Translate(camera.cameratarget); // translate to rotation centre
GL.MultMatrix(ref arcball_rot); // ongoing Arcball rotation
GL.Translate(-camera.cameratarget); // translate back from rotation centre
GL.MultMatrix(ref rtn_complete_rot); // modelview rotation component
Can anyone explain?

Interpolate Up Vector Linearly when Up isn't truly Up?

Overview
I've been looking around for a while and haven't found an answer, so hopefully the community here can help me out. I am re-working my look-at camera (written pre-2000) and am having trouble getting rid of an issue where the look-at and up vectors become aligned, causing the camera to spin wildly out of control. I originally understood this to be gimbal lock, but now I'm not so sure of that.
From my understanding of gimbal lock, when pitch becomes aligned with roll, pitch becomes roll; and in essence this is what it appears to be. But the rate of change shouldn't increase just because the axes become aligned; I should just get a smooth roll. Instead I get a violent roll in which I can't really tell which way the roll is going.
Updating the Camera's Position
When the user moves the mouse I move the camera based on the mouse's X and Y coordinates:
Vector2 mousePosition = new Vector2(e.X, e.Y);
Vector2 delta = mousePosition - mouseSave;
mouseSave = mousePosition;
ShiftOrbit(delta / moveSpeed);
Within the ShiftOrbit method, I calculate the new position based on the look-at, right, and up vectors in relationship to the delta passed from the mouse event above:
// Direction from target to camera, plus the camera's local basis.
Vector3 lookAt = Position - Target;
Vector3 right = Vector3.Normalize(Vector3.Cross(lookAt, Up));
Vector3 localUp = Vector3.Normalize(Vector3.Cross(right, lookAt));
// Mouse deltas move the camera along its local right/up axes,
// scaled by the distance to the target.
Vector3 worldYaw = right * delta.X * lookAt.Length();
Vector3 worldPitch = localUp * delta.Y * lookAt.Length();
// Re-normalise so the camera stays on a sphere around the target.
Position = Vector3.Normalize(Position + worldYaw + worldPitch) * Position.Length();
This works smoothly as it should and moves the camera around its target in any direction of my choosing.
The View Matrix
This is where I experience the problem mentioned in the overview above. My Up property was previously set to always be 0, 0, 1 due to my data being in ECR coordinates. However, this is what causes the axis alignment as I move the camera around and the view matrix is updated. I use the SharpDX method Matrix.CreateLookAtRH(Position, Target, Up) to create my view matrix.
After discovering that the Up vector used when creating the view matrix should be updated instead of always being 0, 0, 1, I encountered another issue: I now caused roll when yaw and pitch were introduced. This can't be allowed due to a requirement, so I immediately began pursuing a fix.
Originally I performed a check to see if I was coming close to being axis-aligned. If I was, I set the Up used to create my view matrix to the local up of the camera; if I wasn't, I used only the Z axis of the local up to ensure that up was either up or down.
float dot = Math.Abs(Vector3.Dot(Up, Position) / (Up.Length() * Position.Length()));
if (dot > 0.98)
Up = localUp;
else
Up = new Vector3(0, 0, localUp.Z);
However, this was a bit jumpy and still didn't seem quite right. After some trial and error, along with some extensive research on the web trying to find potential solutions, I remembered how linear interpolation can transition smoothly from one value to another over a period of time. I then moved to using Vector3.Lerp instead:
float dot = Math.Abs(Vector3.Dot(Up, Position) / (Up.Length() * Position.Length()));
Up = Vector3.Lerp(new Vector3(0, 0, localUp.Z), localUp, dot);
This is very smooth, and only causes any roll when I am very near to being axis-aligned, which isn't enough to be noticeable by the everyday user.
The Problem
My camera also has the ability to attach to a point other than 0, 0, 0, and in this case, the up vector for the camera is set to the normalized position of the target. This causes the original issue in the overview when using Vector3.Lerp as above; so, in the case where my camera is attached to a point other than 0, 0, 0 I do the following instead:
Up = Vector3.Lerp(Vector3.Normalize(Target), localUp, dot);
However, even this doesn't work and I have no idea how to get it to do so. I've been working at this problem for a few days now and have made an extensive effort to fix it, and this is a big improvement so far.
What can I do to prevent the violent spinning using Vector3.Lerp when the up isn't equivalent to 0, 0, z?
Imagine a vertical plane that is rotated around the vertical axis by yaw (ϕ):
The camera is only allowed to rotate with the plane or in the plane, its in-plane orientation given by the pitch (θ):
ϕ and θ should be stored and incremented with the input delta. With this setup, the camera will never tilt, and the local up direction can always be computed:
d and u are the local front and up directions respectively, and are always perpendicular (so alignment won't be an issue). The target can of course be taken as the position + d.
But wait, there's a catch.
Suppose you move your mouse to the right; ϕ increases, and you observe:
If the camera is upright, the view rotates to the right.
If the camera is upside-down, the view rotates to the left.
Ideally this should be consistent regardless of the vertical orientation.
The solution is to flip the sign of increments to ϕ when the camera is upside down. One way would be to scale the increments by cos(θ), which also smoothly reduces the sensitivity as θ approaches 90 / 270 degrees so that there is no sudden change in horizontal rotational direction.
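A minimal sketch of this scheme (assuming SharpDX-style Vector3 with Z as the vertical axis, and phi/theta stored as fields; all names are illustrative):
void ShiftOrbit(Vector2 delta)
{
    // Scaling the yaw input by cos(theta) flips its sign when the camera
    // is upside down and smoothly damps it near theta = 90 / 270 degrees.
    phi += delta.X * (float)Math.Cos(theta);
    theta += delta.Y;

    // Front (d) and up (u) are built from the angles and are always
    // perpendicular, so they can never become aligned.
    Vector3 d = new Vector3(
        (float)(Math.Cos(theta) * Math.Cos(phi)),
        (float)(Math.Cos(theta) * Math.Sin(phi)),
        (float)Math.Sin(theta));
    Vector3 u = new Vector3(
        (float)(-Math.Sin(theta) * Math.Cos(phi)),
        (float)(-Math.Sin(theta) * Math.Sin(phi)),
        (float)Math.Cos(theta));

    Target = Position + d;
    Up = u;
}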

Scale Sprite up and Down to give illusion of a jump

I have some code that I wrote that works, but I feel it could be better and wanted to get some feedback.
The goal I had is to have a sprite scale up and back down in a timely fashion when a button is pushed, so that it gives the illusion of jumping in a "Top Down" view of the game, like the character is jumping off the screen. I already know how to draw scaled images; I'm more interested in the logic of the timing aspect.
This works, I'm just not sure it's the best. I thought maybe there was some equation; a math friend told me maybe a linear equation, or a parabola / second-order equation. I'm not great with math.
Anyway.
Class Properties
private double _jumpingStartedAt;
private double _totalJumpTimeInSeconds = 0.7;
private double _totalJumpFrames = 14;
private double _timeSinceLastScale;
private double _jumpingHalfWayAt;
When the button is pushed for the first time, I start the "jump logic". This runs once per jump. My thought was that I'd mark the "start" time and determine the "halfway" time from _totalJumpTimeInSeconds.
_jumpingStartedAt = gameTime.TotalGameTime.TotalSeconds;
_jumpingHalfWayAt = _jumpingStartedAt + SecondsBetweenFrame() * (_totalJumpFrames / 2);
And then this is run on each Update() until my "jump" is complete, i.e. _isJumping = false. The logic here is that I scale up once per "frame" until the halfway point, then scale back down.
_timeSinceLastScale += gameTime.ElapsedGameTime.TotalSeconds;
if (_timeSinceLastScale > SecondsBetweenFrame() && gameTime.TotalGameTime.TotalSeconds < _jumpingHalfWayAt)
{
    Scale += 0.2f;
    _timeSinceLastScale = 0;
}
else if (gameTime.TotalGameTime.TotalSeconds > _jumpingHalfWayAt)
{
    Scale -= 0.2f;
    if (Scale < 1.0) Scale = 1; // probably don't need this; was worried it might go past 0
    if (Scale == 1.0) _isJumping = false;
}
private double SecondsBetweenFrame()
{
    return _totalJumpTimeInSeconds / this._totalJumpFrames;
}
Now this works, but seems a little convoluted to me.
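For reference, the parabola idea mentioned above would collapse this to a few lines: 4t(1 - t) is 0 at t = 0 and t = 1 and peaks at 1 when t = 0.5. A sketch, where maxExtraScale is an assumed value for the extra scale at the jump's peak:
// Normalised jump time t in [0, 1].
double t = (gameTime.TotalGameTime.TotalSeconds - _jumpingStartedAt) / _totalJumpTimeInSeconds;
if (t >= 1.0) { Scale = 1f; _isJumping = false; }
else Scale = 1f + (float)(4.0 * maxExtraScale * t * (1.0 - t));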
Stretching image when jumping - side view
Yeah, what you created is pretty complicated.
I assume your sprite is also moving up and down when jumping; that you have some sort of Vector2 velocity, which you change by dv = gravityAcceleration * dt in every update, and then change Vector2 position by dp = velocity * dt. If so, I would rather use the velocity.Y value to calculate how the sprite should stretch. I think it's more natural, and your code will become much simpler.
Here's an image to describe better what I mean:
However, you will probably face another problem here: just at the beginning of the jump your sprite will suddenly get a high velocity while still being near the ground, which can cause it to cross through the floor for a moment. To prevent that, you can artificially move your sprite upwards by the smallest needed value for the duration of the jump. The problem is described by the image below:
As you can see, the first stretched ball moved upwards a little bit, but not enough. You have to calculate the difference between the sizes before and after stretching, and then move your sprite up by that distance.
If you do it like that, your Update should shorten to just a few lines. I believe you can do simple calculations on your own.
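A sketch of what those few lines might look like (velocity, gravityAcceleration, stretchFactor, SpriteScale and spriteHeight are assumed names; XNA-style screen space where Y grows downward):
float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;
velocity.Y += gravityAcceleration * dt;   // dv = gravityAcceleration * dt
Position += velocity * dt;                // dp = velocity * dt

// Stretch vertically with |velocity.Y|, squash horizontally to compensate.
float stretch = 1f + stretchFactor * Math.Abs(velocity.Y);
SpriteScale = new Vector2(1f / stretch, stretch);

// Lift the sprite by half the added height so the stretched image
// doesn't poke through the floor (the correction described above).
float drawY = Position.Y - (stretch - 1f) * spriteHeight * 0.5f;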
Easier approach
...Unless you'd rather have your sprite behave exactly the way you described. Then you could modify the scale according to your Y position:
if (!isJumping && KeyboardState.IsKeyDown(Keys.Space))
{
    isJumping = true;
    jumpStartPosition = Position;
}
if (!isJumping) Scale = 1f;
else
{
    // 1f + ... keeps the sprite at normal size at the start of the jump;
    // Abs() because Y decreases as the sprite moves up the screen.
    Scale = 1f + StretchFactor * Math.Abs(Position.Y - jumpStartPosition.Y);
}
where:
- isJumping is a bool,
- jumpStartPosition is a Vector2,
- Position is a Vector2 property of your sprite,
- StretchFactor is a float property of your sprite telling how much does it stretch.
And you also need an end-of-jump condition, for example when the sprite's Position.Y becomes smaller than jumpStartPosition.Y. But generally this solution (as well as yours) has one disadvantage: there will be problems if you want to start the jump at one height and end it at another:
so I would rather recommend my first solution. There you can implement the stop-jump condition with a collision check.
Stretching image when jumping - top-down view
Bummer. Since it wasn't originally specified that this is a top-down game, like the first GTAs, I really misunderstood the question, so the answer above doesn't fit well. The proper answer follows.
If you want it to be realistic, you should use some basic principles of perspective. As we look at the jumping character from the top, it comes closer to us, so its image grows. Why is that? Look at the pic below.
There are two things needed for perspective to work: the center of perspective and the screen. The center of perspective is the point where all "rays" cross; a "ray" is a line from any point in the world to the center of our eye. The screen is the plane where the image of the 3D world is formed. Points in the world are cast onto the screen along their rays. Of course your game is pseudo-3D, but that shouldn't matter in this case.
When z grows, the sprite comes closer to the center of perspective. If you imagine a ray from the center of perspective to the edge of the sprite, the angle of that ray changes as the sprite's distance to the center of perspective becomes smaller, and that change of angle moves the point's image on the screen. That's why the image grows or shrinks.
Now we may wonder: OK, how do we put this into numbers? Look at the picture below:
I deliberately translated the whole world by -C so that the z coordinate of the center of perspective becomes 0. That makes the calculations simpler. What we are trying to find is x', the coordinate of the point's image on the screen. Let Z* = |z - C|. Looking at the picture, it becomes clear that we can find what we need by a pretty simple proportion: x'/L = x/Z*, so x' = x * L / Z*.
Using the same method you can calculate y'. If your character is always at the center of the screen, all you need is x'/x = y'/y = L/Z* = S, i.e. your scale. That's because x in this scenario is, in fact, the half-width of the sprite, and y is the half-height. However, if your character can move freely around the screen, you may want to scale & translate it so it looks more natural:
The white square is the on-the-ground sprite; the gray square is the jumping sprite. In this case you will have to know the l (left), r (right), t (top) and b (bottom) coords of the sprite's boundaries (top-bottom means the Y-axis, not the Z-axis). Then, using the same proportion, you can get l', r', t' and b', the boundaries of the sprite's image on the screen. From this data you should be able to calculate both the scale and the translation.
Note: L is a parameter of the calculation which you have to choose yourself. Assuming the screen has constant width Ws and height Hs, L strictly corresponds to the FOV (field of view); you can derive it using proportions as well: L = (cot(FOV/2) * Ws) / 2, i.e. L = Ws / (2 * tan(FOV/2)). I would recommend FOV = 60 deg. If you make the FOV too big, you may face the fisheye problem.
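As a code sketch of these proportions (XNA-style; screenWidthPixels, spriteZ and C are assumed names):
float fov = MathHelper.ToRadians(60f);                          // recommended FOV
float L = screenWidthPixels / (2f * (float)Math.Tan(fov / 2f)); // L = Ws / (2 * tan(FOV/2))
float zStar = Math.Abs(spriteZ - C);                            // Z* = |z - C|
float S = L / zStar;                                            // scale: x'/x = y'/y = S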

xna make an object move around ITS axis(not around point 0,0,0)

How can I make an object rotate around its own axis? I.e., have the moon rotate BOTH around the point (0, 0, 0) AND around its own axis? So far I have only been able to do the (0, 0, 0) rotation, using the gameTime component and creating a rotation matrix.
Translate the object's center to (0,0,0), do the rotation, and translate back.
Let's say we have the following:
class Moon2D
{
    Texture2D texture;
    Vector2 axis;
    Vector2 origin;
}
The origin point could be (0,0), but let's say it's something more complex--something like (29,43). Now let's say the texture's width is 50 and the height is 90.
To get the axis for the texture to rotate around, assuming you want the center, you would do the following (assuming the origin (i.e. current position) and texture are loaded):
axis.X = 0.5f * texture.Width;
axis.Y = 0.5f * texture.Height;
As you know, that would make the axis a vector of (25, 45).
As BlueRaja states above, you could then make a method that looks like this:
void Rotate()
{
    origin.X -= axis.X;
    origin.Y -= axis.Y;
    // rotation goes here
    origin.X += axis.X;
    origin.Y += axis.Y;
}
This should work for any sort of standard texture. (And of course, you don't HAVE to have the Vector2 I made up called "axis"--it's just for easy reference.)
Now, take the same logic and apply it for the 3D.
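For example, a hedged XNA sketch of the moon case (spinAngle and orbitAngle are accumulated from gameTime; orbitRadius is an assumed distance):
// Spin around the moon's own axis first, then translate out to the
// orbit radius, then rotate around the origin (0,0,0).
Matrix world = Matrix.CreateRotationY(spinAngle)
             * Matrix.CreateTranslation(orbitRadius, 0f, 0f)
             * Matrix.CreateRotationY(orbitAngle);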
A word of advice: if you are trying to work through logic in 3D, look at the logic in 2D first. 9 times out of 10, you'll find the answer you're looking for!
(If I made any mistake in the translation in my Rotate() method, please let me know; I'm sort of tired where I'm at and I'm not testing it, but the rotation should work like that, no?)
