I have a simple square that I'm drawing in 3D space using Direct3D 11 and SlimDX, with the following coordinates (I know it renders):
0,0,0.5
0,0.5,0.5
0.5,0.5,0.5
0.5,0,0.5
I have a camera class that handles camera movement by applying matrix transformations to the view matrix; for example, Camera.MoveForward(float x) moves the camera forward by x. It also holds the View and Projection matrices, which I instantiate using the following code:
Matrix view = Matrix.LookAtLH(
    new Vector3(0f, 0f, -5f),
    new Vector3(0f, 0f, 0f),
    new Vector3(0f, 1f, 0f));

Matrix projection = Matrix.PerspectiveFovLH(
    (float)Math.PI / 2,
    WINDOWWIDTH / WINDOWHEIGHT,
    0.1f,
    110f);
The world matrix is set to the identity matrix.
In my shader code I transform my coordinates using the following code:
PS_IN VS(VS_IN input)
{
    PS_IN output = (PS_IN)0;
    output.pos = mul( input.pos, WorldViewProjection );
    output.col = float4(1, 1, 1, 1);
    return output;
}
Where WorldViewProjection is set in my code using the following:
Matrix worldview = Matrix.Multiply(world, camera.ViewMatrix);
Matrix worldviewprojection = Matrix.Multiply(worldview, camera.ProjectionMatrix);
I know the camera class's transformations work, since it's old code I wrote for an MDX application that worked fine. However, when this runs I can see my square, and moving forwards and backwards works just fine, but moving left and right appears to rotate the square around the Y (up) axis instead of translating it. Moving up and down has a similar effect, rotating about the X (horizontal) axis instead.
For reference, the camera movement functions:
public void MoveRight(float distance)
{
    position.Z += (float)(Math.Sin(-angle.Y) * -distance);
    position.X += (float)(Math.Cos(-angle.Y) * -distance);
    updateViewMatrix();
}

public void MoveForward(float distance)
{
    position.X += (float)(Math.Sin(-angle.Y) * -distance);
    position.Z += (float)(Math.Cos(-angle.Y) * -distance);
    updateViewMatrix();
}

public void MoveUp(float distance)
{
    position.Y -= distance;
    updateViewMatrix();
}

private void updateViewMatrix()
{
    ViewMatrix = Matrix.Translation(position.X, position.Y, position.Z)
               * Matrix.RotationY(angle.Y)
               * Matrix.RotationX(angle.X)
               * Matrix.RotationZ(angle.Z);
}
The issue may lie in this line:
output.pos = mul( input.pos, WorldViewProjection );
If input.pos is a float3, it carries no w component, so the translation part of the matrix never gets applied and only the rotation ends up affecting the result, which matches the symptoms you describe. I would make sure output.pos is a float4 and then do:
output.pos = mul( float4(input.pos, 1), WorldViewProjection );
Let me know how this works out. If this isn't the problem, I'll take a deeper look at what may be going on.
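To see why the w component matters, here is a minimal sketch using the SlimDX types from the question (the translation matrix here is just an arbitrary value for illustration): a translation only moves a vector whose w is 1.
// Illustration only: translation affects (x, y, z, 1) but leaves (x, y, z, 0) unchanged,
// because SlimDX (like D3DX) keeps the translation in the fourth row and multiplies
// row vectors on the left.
Matrix t = Matrix.Translation(3f, 0f, 0f);
Vector4 point     = Vector4.Transform(new Vector4(1f, 2f, 3f, 1f), t); // (4, 2, 3, 1)
Vector4 direction = Vector4.Transform(new Vector4(1f, 2f, 3f, 0f), t); // (1, 2, 3, 0)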
I am trying to implement object picking in OpenGL using C# and OpenTK. I have written a class for this purpose based on two sources:
OpenGL ray casting (picking): account for object's transform
https://www.bfilipek.com/2012/06/select-mouse-opengl.html
Currently my code is only for calculating the distance of the mouse pointer from an arbitrary test coordinate of (0,0,0), but once working it would not take much to iterate through objects in a scene to find a match.
The method is to define a ray underneath the mouse pointer between the near and far clipping planes. Then find the point on that ray which is closest to the point being tested and return the distance between the two. This should be zero when the mouse pointer is directly over (0,0,0) and increase as it moves away in any direction.
Can anyone help troubleshoot this? It executes without errors but the distance being returned clearly isn't correct. I understand the principles but not the finer points of the calculations.
Although I have found various examples online which almost do it, they are generally in a different language or framework and/or use deprecated methods and/or are incomplete or not working.
public class ObjectPicker
{
    public static float DistanceFromPoint(Point mouseLocation, Vector3 testPoint, Matrix4 modelView, Matrix4 projection)
    {
        Vector3 near = UnProject(new Vector3(mouseLocation.X, mouseLocation.Y, 0), modelView, projection); // start of ray
        Vector3 far = UnProject(new Vector3(mouseLocation.X, mouseLocation.Y, 1), modelView, projection);  // end of ray
        Vector3 pt = ClosestPoint(near, far, testPoint); // find point on ray which is closest to test point
        return Vector3.Distance(pt, testPoint);          // return the distance
    }

    private static Vector3 ClosestPoint(Vector3 A, Vector3 B, Vector3 P)
    {
        Vector3 AB = B - A;
        float ab_square = Vector3.Dot(AB, AB);
        Vector3 AP = P - A;
        float ap_dot_ab = Vector3.Dot(AP, AB);
        // t is a projection param when we project vector AP onto AB
        float t = ap_dot_ab / ab_square;
        // calculate the closest point
        Vector3 Q = A + Vector3.Multiply(AB, t);
        return Q;
    }

    private static Vector3 UnProject(Vector3 screen, Matrix4 modelView, Matrix4 projection)
    {
        int[] viewport = new int[4];
        OpenTK.Graphics.OpenGL.GL.GetInteger(OpenTK.Graphics.OpenGL.GetPName.Viewport, viewport);

        Vector4 pos = new Vector4();
        // Map x and y from window coordinates, map to range -1 to 1
        pos.X = (screen.X - (float)viewport[0]) / (float)viewport[2] * 2.0f - 1.0f;
        pos.Y = 1 - (screen.Y - (float)viewport[1]) / (float)viewport[3] * 2.0f;
        pos.Z = screen.Z * 2.0f - 1.0f;
        pos.W = 1.0f;

        Vector4 pos2 = Vector4.Transform( pos, Matrix4.Invert(modelView) * projection );
        Vector3 pos_out = new Vector3(pos2.X, pos2.Y, pos2.Z);
        return pos_out / pos2.W;
    }
}
It is called like this:
private void GlControl1_MouseMove(object sender, MouseEventArgs e)
{
    float dist = ObjectPicker.DistanceFromPoint(new Point(e.X, e.Y), new Vector3(0, 0, 0), model, projection);
    this.Text = dist.ToString(); // display in window caption for debugging
}
The matrices are passed in as per the above code. I'm fairly sure that the contents of those matrices must be correct, since the rendering works fine, and I can rotate/zoom successfully. This is the vertex shader, FWIW:
string vertexShaderSource =
    "#version 330 core\n" +
    "layout(location = 0) in vec3 aPos;" +
    "layout(location = 1) in vec3 aNormal;" +
    "uniform mat4 model;" +
    "uniform mat4 view;" +
    "uniform mat4 projection;" +
    "out vec3 FragPos;" +
    "out vec3 Normal;" +
    "void main()" +
    "{" +
    "gl_Position = projection * view * model * vec4(aPos, 1.0);" +
    "FragPos = vec3(model * vec4(aPos, 1.0));" +
    "Normal = vec3(model * vec4(aNormal, 1.0));" +
    "}";
I use an implementation of Arcball for rotation. Zooming is done using a translation, like this:
private void glControl1_MouseWheel(object sender, System.Windows.Forms.MouseEventArgs e)
{
    zoom += (float)e.Delta / 240;
    view = Matrix4.CreateTranslation(0.0f, 0.0f, zoom);
    SetMatrix4(Handle, "view", view);
    glControl1.Invalidate();
}
Each vertex coordinate is transformed by the model view matrix, which takes the coordinate from model space to view space, and then by the projection matrix, which takes it from view space to clip space. The perspective divide then converts the clip space coordinate to normalized device space.
If you want to convert from normalized device space back to model space, you have to do the reverse operations. That means you have to transform by the inverse projection matrix and then by the inverse model view matrix:
Vector4 pos2 = Vector4.Transform(pos, Matrix4.Invert(projection) * Matrix4.Invert(modelView));
or, equivalently:
Vector4 pos2 = Vector4.Transform(pos, Matrix4.Invert(modelView * projection));
Note that OpenTK matrices have to be multiplied from left to right. See the answer to OpenGL 4.2 LookAt matrix only works with -z value for eye position.
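As a quick sanity check of the ordering (a sketch only, using arbitrary placeholder matrices, not values from the question), the two forms above produce the same matrix, because (MV * P)^-1 = P^-1 * MV^-1 in OpenTK's row-vector convention:
// Sketch: the two inverse forms agree.
Matrix4 mv   = Matrix4.LookAt(new Vector3(0, 0, 5), Vector3.Zero, Vector3.UnitY);
Matrix4 proj = Matrix4.CreatePerspectiveFieldOfView(MathHelper.PiOver2, 16f / 9f, 0.1f, 100f);

Matrix4 a = Matrix4.Invert(proj) * Matrix4.Invert(mv);
Matrix4 b = Matrix4.Invert(mv * proj);
// a and b are (numerically) the same matrix, so either can be passed to Vector4.Transform.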
Answering my own question here so that I can post the working code for the benefit of other users, but at least half the answer was provided by Rabbid76, whose help I am very grateful for.
There were two errors in my original code:
Vector4 pos2 = Vector4.Transform( pos, Matrix4.Invert(modelView) * projection );
where the two matrices were multiplied in the wrong order, and the projection matrix was not inverted, and:
float dist = ObjectPicker.DistanceFromPoint(new Point(e.X,e.Y), new Vector3(0,0,0), model, projection);
where I passed in the model matrix rather than the modelview matrix (the product of the model and view matrices).
This works:
private void GlControl1_MouseMove(object sender, MouseEventArgs e)
{
    float dist = ObjectPicker.DistanceFromPoint(new Point(e.X, e.Y), new Vector3(0, 0, 0), model * view, projection);
    // do something with the result
}
public class ObjectPicker
{
    public static float DistanceFromPoint(Point mouseLocation, Vector3 testPoint, Matrix4 modelView, Matrix4 projection)
    {
        int[] viewport = new int[4];
        OpenTK.Graphics.OpenGL.GL.GetInteger(OpenTK.Graphics.OpenGL.GetPName.Viewport, viewport);

        Vector3 near = UnProject(new Vector3(mouseLocation.X, mouseLocation.Y, 0), modelView, projection); // start of ray (near plane)
        Vector3 far = UnProject(new Vector3(mouseLocation.X, mouseLocation.Y, 1), modelView, projection);  // end of ray (far plane)
        Vector3 pt = ClosestPoint(near, far, testPoint); // find point on ray which is closest to test point
        return Vector3.Distance(pt, testPoint);          // return the distance
    }

    private static Vector3 ClosestPoint(Vector3 A, Vector3 B, Vector3 P)
    {
        Vector3 AB = B - A;
        float ab_square = Vector3.Dot(AB, AB);
        Vector3 AP = P - A;
        float ap_dot_ab = Vector3.Dot(AP, AB);
        // t is a projection param when we project vector AP onto AB
        float t = ap_dot_ab / ab_square;
        // calculate the closest point
        Vector3 Q = A + Vector3.Multiply(AB, t);
        return Q;
    }

    private static Vector3 UnProject(Vector3 screen, Matrix4 modelView, Matrix4 projection)
    {
        int[] viewport = new int[4];
        OpenTK.Graphics.OpenGL.GL.GetInteger(OpenTK.Graphics.OpenGL.GetPName.Viewport, viewport);

        Vector4 pos = new Vector4();
        // Map x and y from window coordinates, map to range -1 to 1
        pos.X = (screen.X - (float)viewport[0]) / (float)viewport[2] * 2.0f - 1.0f;
        pos.Y = 1 - (screen.Y - (float)viewport[1]) / (float)viewport[3] * 2.0f;
        pos.Z = screen.Z * 2.0f - 1.0f;
        pos.W = 1.0f;

        Vector4 pos2 = Vector4.Transform( pos, Matrix4.Invert(projection) * Matrix4.Invert(modelView) );
        Vector3 pos_out = new Vector3(pos2.X, pos2.Y, pos2.Z);
        return pos_out / pos2.W;
    }
}
Since posting this question I have learned that the method is generally called ray casting, and have found a couple of excellent explanations of it:
Mouse Picking with Ray Casting by Anton Gerdelan
OpenGL 3D Game Tutorial 29: Mouse Picking by ThinMatrix
So I've been trying to get this to work, but no luck so far; hopefully you can help. The thing is, I have a camera in my project that the user can freely move with the mouse and buttons.
Currently like so:
move = new Vector3(0, 0, -1) * moveSpeed;
move = new Vector3(0, 0, 1) * moveSpeed;
...
And then I just add move vector to cameraPos vector: cameraPos += move
The problem is that if I rotate the camera and then try to move, for example down, it will not move straight down but at an angle. I am assuming this is because the movement happens along the camera's local axes, but what I want is to move along the world axes. Is something like that possible, or do I have to somehow calculate the angle and then move on more than one axis?
Best regards!
EDIT:
I am rotating the camera, where cameraPos is the camera's current position and rotation is its current rotation. This is the code that rotates the camera:
void Update()
{
    ...
    if (pressed)
    {
        int newY = currentY - oldY;
        pitch -= rotSpeed * newY;
    }
    Rotate();
}

void Rotate()
{
    rotation = Matrix.CreateRotationX(pitch);
    Vector3 transformedReference = Vector3.Transform(cameraPos, rotation);
    Vector3 lookAt = cameraPos + transformedReference;
    view = Matrix.CreateLookAt(cameraPos, lookAt, Vector3.Up);
    oldY = currentY;
}
I hope this is more readable.
I was able to solve this problem by using:
Vector3 v;
if (state.IsKeyDown(Keys.Up))
    v = new Vector3(0, 0, 1) * moveSpeed;
... // other code for moving down, left, right

if (state.IsKeyDown(Keys.V))
    view *= Matrix.CreateRotationX(MathHelper.ToRadians(-5f) * rotSpeed); // multiplying the view matrix to create rotation

view *= Matrix.CreateTranslation(v); // multiplying the view matrix to create movement by Vector3 v
I suppose you're already saving the direction you're looking at in a Vector3. Replace your method with this:
direction.Normalize();
var move = direction * moveSpeed;
cameraPos += move;
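If the goal is specifically movement along the world axes (as asked above), here is a minimal sketch of the difference between world-axis and camera-relative movement, using XNA types; the rotation variable is assumed to be the camera's current rotation matrix, not something from the code above.
// Sketch only: 'rotation' is the camera's current rotation matrix (assumed).
Vector3 input = new Vector3(0, -1, 0) * moveSpeed;        // request: "move down"

Vector3 worldMove = input;                                 // world axes: ignore the rotation
Vector3 localMove = Vector3.Transform(input, rotation);    // local axes: rotate the input first

cameraPos += worldMove; // world-axis movement, regardless of where the camera is looking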
I have a control that zooms in and out, and pans. By clicking a choice of two buttons, the user may zoom in or out respectively. My issue is with the translations after zooming in or out: if I zoom out to about .2F, I have to click-pan multiple times to move the same distance I could have at base 0. I seemed to solve this by dividing the zoom by itself squared, z/(z*z), but that seems to transform the entire matrix - and I'm NOT translating the matrix with this on any line!
Here is my code:
Matrix mtrx = new Matrix();
mtrx.Translate(pan.X, pan.Y);
mtrx.Scale(zoom, zoom);
e.Graphics.Transform = mtrx;
// To-Do drawing code here
Basic notes:
Zoom increases and decreases by .12F on button click event
The control being painted on is an inherited UserControl class
The development environment is C# 2010
The drawing code deals with two rectangles whose location is {0, 0}
I just want to be able to pan at the same speed when I am zoomed in or out a ways as I can at base 0 (not zoomed in or out) ~ no motion-lag feeling
EDIT: I've updated the code above to an improved version of what I was dealing with before. This handles the panning speed after zooming, but the new issue is with where the zoom is taking place: it's as if the point of zoom is being translated as well...
Edit: How I would do this in 3D (XNA):
public class Camera
{
    private float timeLapse = 0.0f;
    private Vector3 position, view, up;
    // These two are referenced from outside the class (e.g. cam.viewMatrix below)
    public Matrix viewMatrix, projectionMatrix;

    public Camera(Vector2 windowSize)
    {
        viewMatrix = Matrix.CreateLookAt(this.position, this.view, this.up);
        projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.Pi / 4.0f,
            (float)windowSize.X / (float)windowSize.Y, 0.005f, 1000.0f);
    }

    // Sets time lapse between frames
    public void SetFrameInterval(GameTime gameTime)
    {
        timeLapse = (float)gameTime.ElapsedGameTime.Milliseconds;
    }

    // Move camera and view position according to gamer's move and strafe events.
    private void zoomHelper(Vector3 direction, float speed)
    {
        speed *= (float)timeLapse; // scale rate of change
        direction *= speed;

        position.Y += direction.Y;
        position.X += direction.X; // adjust position
        position.Z += direction.Z;

        view.Y += direction.Y;
        view.X += direction.X;     // adjust view change
        view.Z += direction.Z;
    }

    // Allows the camera to move forward or backward in the direction it is looking.
    public void Zoom(float amount)
    {
        Vector3 look = new Vector3(0.0f, 0.0f, 0.0f);
        Vector3 unitLook;

        // scale rate of change in movement
        const float SCALE = 0.005f;
        amount *= SCALE;

        // update forward direction
        look = view - position;

        // get new camera direction vector
        unitLook = Vector3.Normalize(look);

        // update camera position and view position
        zoomHelper(unitLook, amount);
        viewMatrix = Matrix.CreateLookAt(position, view, up);
    }

    // Allows the camera to pan on a 2D plane in 3D space.
    public void Pan(Vector2 mouseCoords)
    {
        // The 2D pan translation vector in screen space
        Vector2 scaledMouse = (mouseCoords * 0.005f) * (float)timeLapse;

        // The camera's look vector
        Vector3 look = view - position;

        // The pan coordinate system basis vectors
        Vector3 Right = Vector3.Normalize(Vector3.Cross(look, up));
        Vector3 Up = Vector3.Normalize(Vector3.Cross(Right, look));

        // The 3D pan translation vector in world space
        Vector3 Pan = scaledMouse.X * Right + -scaledMouse.Y * Up;

        // Translate the camera and the target by the pan vector
        position += Pan;
        view += Pan;
        viewMatrix = Matrix.CreateLookAt(position, view, up);
    }
}
Used like this:
Camera cam; // object
effect.Transform = cam.viewMatrix * cam.projectionMatrix;
// ToDo: Drawing Code Here
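A hypothetical way to drive it each frame (the input variables here - scrollDelta, middleButtonDown, mouseDeltaX/Y - are assumptions for illustration, not part of the class):
// Sketch only: wire the camera to input once per update.
cam.SetFrameInterval(gameTime);                     // keep movement frame-rate independent

if (scrollDelta != 0)
    cam.Zoom(scrollDelta);                          // move along the look vector

if (middleButtonDown)
    cam.Pan(new Vector2(mouseDeltaX, mouseDeltaY)); // slide on the view plane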
try this:
Matrix mtrx = new Matrix();
mtrx.Scale(zoom, zoom);
mtrx.Translate(pan.X/zoom, pan.Y/zoom, MatrixOrder.Append);
e.Graphics.Transform = mtrx;
// To-Do drawing code here
Can anyone see where I am going wrong here?
I have a CameraObject class (it's not a camera, simply the Model of a box to represent a "camera") that has a Model and a Position. It also has the usual LoadContent(), Draw() and Update() methods.
However, when I draw the array of Models, I only see 1 model on the screen (well, there might be 3, but they might all be in the same location).
The Draw() method for the CameraModel class looks like this:
public void Draw(Matrix view, Matrix projection)
{
    transforms = new Matrix[CameraModel.Bones.Count];
    CameraModel.CopyAbsoluteBoneTransformsTo(transforms);

    // Draw the model
    foreach (ModelMesh myMesh in CameraModel.Meshes)
    {
        foreach (BasicEffect myEffect in myMesh.Effects)
        {
            myEffect.World = transforms[myMesh.ParentBone.Index];
            myEffect.View = view;
            myEffect.Projection = projection;

            myEffect.EnableDefaultLighting();
            myEffect.SpecularColor = new Vector3(0.25f);
            myEffect.SpecularPower = 16;
        }
        myMesh.Draw();
    }
}
Then in my Game1 class I create an array of CameraObject objects:
CameraObject[] cameraObject = new CameraObject[3];
Which I Initialize() - so each new object should be at +10 from the previous object
for (int i = 0; i < cameraObject.Length; i++)
{
    cameraObject[i] = new CameraObject();
    cameraObject[i].Position = new Vector3(i * 10, i * 10, i * 10);
}
And finally Draw()
Matrix view = camera.viewMatrix;
Matrix projection = camera.projectionMatrix;

for (int i = 0; i < cameraObject.Length; i++)
{
    cameraObject[i].Draw(view, projection);
}
Where view and projection are from my Camera() class which looks like so:
viewMatrix = Matrix.Identity;
projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), 16 / 9, .5f, 500f);
But I only see 1 object drawn to the screen. I have stepped through the code and all seems well, but I can't figure out why I can't see 3 objects.
Can anyone spot where I am going wrong?
This is the code in my Camera() class to UpdateViewMatrix:
private void UpdateViewMatrix(Matrix chasedObjectsWorld)
{
    switch (currentCameraMode)
    {
        case CameraMode.free:
            // To be able to rotate the camera and not always have it looking at the same point,
            // normalize the cameraRotation's vectors, as those are the vectors that the camera will rotate around
            cameraRotation.Forward.Normalize();
            cameraRotation.Up.Normalize();
            cameraRotation.Right.Normalize();

            // Multiply the cameraRotation by the Matrix.CreateFromAxisAngle() function,
            // which rotates the matrix around any vector by a certain angle.
            // Rotate the matrix around its own vectors so that it works properly no matter how it's rotated already
            cameraRotation *= Matrix.CreateFromAxisAngle(cameraRotation.Right, pitch);
            cameraRotation *= Matrix.CreateFromAxisAngle(cameraRotation.Up, yaw);
            cameraRotation *= Matrix.CreateFromAxisAngle(cameraRotation.Forward, roll);

            // After the matrix is rotated, the yaw, pitch, and roll values are set back to zero
            yaw = 0.0f;
            pitch = 0.0f;
            roll = 0.0f;

            // The target is changed to accommodate the rotation matrix.
            // It is set at the camera's position, and then cameraRotation's forward vector is added to it.
            // This ensures that the camera is always looking in the direction of the forward vector, no matter how it's rotated
            target = Position + cameraRotation.Forward;
            break;

        case CameraMode.chase:
            // Normalize the rotation matrix's forward vector because we'll be using that vector to roll around
            cameraRotation.Forward.Normalize();
            chasedObjectsWorld.Right.Normalize();
            chasedObjectsWorld.Up.Normalize();

            cameraRotation = Matrix.CreateFromAxisAngle(cameraRotation.Forward, roll);

            // Each frame, desiredTarget will be set to the position of whatever object we're chasing.
            // Then set the actual target equal to the desiredTarget; the target's X and Y coordinates can then be changed at will
            desiredTarget = chasedObjectsWorld.Translation;
            target = desiredTarget;
            target += chasedObjectsWorld.Right * yaw;
            target += chasedObjectsWorld.Up * pitch;

            // We always want the camera positioned behind the object,
            // so desiredPosition needs to be transformed by the chased object's world matrix
            desiredPosition = Vector3.Transform(offsetDistance, chasedObjectsWorld);

            // Smooth the camera's movement and transition the target vector back to the desired target
            Position = Vector3.SmoothStep(Position, desiredPosition, .15f);
            yaw = MathHelper.SmoothStep(yaw, 0f, .1f);
            pitch = MathHelper.SmoothStep(pitch, 0f, .1f);
            roll = MathHelper.SmoothStep(roll, 0f, .1f);
            break;

        case CameraMode.orbit:
            // Normalize the rotation matrix's forward vector, and then calculate cameraRotation.
            // Instead of yawing and pitching over cameraRotation's vectors, we yaw and pitch over the world axes;
            // by rotating over world axes instead of local axes, the orbiting effect is achieved
            cameraRotation.Forward.Normalize();
            cameraRotation = Matrix.CreateRotationX(pitch) * Matrix.CreateRotationY(yaw) * Matrix.CreateFromAxisAngle(cameraRotation.Forward, roll);

            desiredPosition = Vector3.Transform(offsetDistance, cameraRotation);
            desiredPosition += chasedObjectsWorld.Translation;
            Position = desiredPosition;

            target = chasedObjectsWorld.Translation;
            roll = MathHelper.SmoothStep(roll, 0f, .2f);
            break;
    }

    // Calculate the view matrix.
    // The up vector is based on how the camera is rotated, not on the standard Vector3.Up.
    // The view matrix needs an up vector to fully orient itself in 3D space; otherwise,
    // the camera would have no way of knowing whether or not it's upside-down
    viewMatrix = Matrix.CreateLookAt(Position, target, cameraRotation.Up);
}
I'm not seeing in your code where your cameraObject[n].Position (which is probably the only thing that uniquely differentiates the position between the three models) is affecting the effect.World property.
effect.World = transforms[myMesh.ParentBone.Index];
does not typically take individual model position into account.
Try something like this:
for (int i = 0; i < cameraObject.Length; i++)
{
    cameraObject[i].Draw(view, projection, cameraObject[i].Position);
}

// later, in the Draw method
public void Draw(Matrix view, Matrix projection, Vector3 pos)
{
    // ...
    myEffect.World = transforms[myMesh.ParentBone.Index] * Matrix.CreateTranslation(pos);
    // ...
}
cameraObject[1] and [2] are located at a greater positive Z value than cameraObject[0], while the view matrix places the camera at the origin, looking in the negative Z direction (remember, the view matrix is the inverted equivalent of a world matrix).
Instead of setting your viewMatrix to Matrix.Identity, set it to this and you might see all three:
viewMatrix = Matrix.CreateLookAt(new Vector3(0,0,75), Vector3.Zero, Vector3.Up);
When I control the spaceship on 1 axis all is fine - that is, the depth (z) and the rotation on the z plane through 360 degrees, so that's 2 axes. I also have a camera right behind it whose position I have to maintain. When the 3rd axis comes into play it all goes bad. Let me show you some code.
Here is the part that fails, the draw method of the spaceship:
public float ForwardDirection { get; set; }
public float VerticalDirection { get; set; }

public void Draw(Matrix view, Matrix projection)
{
    Matrix[] transforms = new Matrix[Model.Bones.Count];
    Model.CopyAbsoluteBoneTransformsTo(transforms);

    Matrix worldMatrix = Matrix.Identity;
    Matrix worldMatrix2 = Matrix.Identity;

    Matrix rotationYMatrix = Matrix.CreateRotationY(ForwardDirection);
    Matrix rotationXMatrix = Matrix.CreateRotationX(VerticalDirection); // me
    Matrix translateMatrix = Matrix.CreateTranslation(Position);

    worldMatrix = rotationYMatrix * translateMatrix;
    worldMatrix2 = rotationXMatrix * translateMatrix;
    //worldMatrix *= rotationXMatrix;

    foreach (ModelMesh mesh in Model.Meshes) // NEED TO FIX THIS
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.World =
                worldMatrix * transforms[mesh.ParentBone.Index]; // position
            //effect.World =
            //    worldMatrix2 * transforms[mesh.ParentBone.Index]; // position

            effect.View = view;             // camera
            effect.Projection = projection; // 2d to 3d

            effect.EnableDefaultLighting();
            effect.PreferPerPixelLighting = true;
        }
        mesh.Draw();
    }
}
For the extra axis, worldMatrix2 is implemented, which I DON'T know how to combine with the other axis. Do I multiply it?
Also, the camera update method has a similar problem:
public void Update(float avatarYaw, float avatarXaw, Vector3 position, float aspectRatio)
{
    //Matrix rotationMatrix = Matrix.CreateRotationY(avatarYaw);
    Matrix rotationMatrix2 = Matrix.CreateRotationX(avatarXaw);

    //Vector3 transformedheadOffset =
    //    Vector3.Transform(AvatarHeadOffset, rotationMatrix);
    Vector3 transformedheadOffset2 = Vector3.Transform(AvatarHeadOffset, rotationMatrix2);
    //Vector3 transformedheadOffset2 = Vector3.Transform(transformedheadOffset, rotationMatrix2);

    //Vector3 transformedReference =
    //    Vector3.Transform(TargetOffset, rotationMatrix);
    Vector3 transformedReference2 = Vector3.Transform(TargetOffset, rotationMatrix2);
    //Vector3 transformedReference2 = Vector3.Transform(transformedReference, rotationMatrix2);

    Vector3 cameraPosition = position + transformedheadOffset2; /** + transformedheadOffset; */
    Vector3 cameraTarget = position + transformedReference2;    /** + transformedReference; */

    // Calculate the camera's view and projection matrices based on current values.
    ViewMatrix =
        Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
    ProjectionMatrix =
        Matrix.CreatePerspectiveFieldOfView(
            MathHelper.ToRadians(GameConstants.ViewAngle), aspectRatio,
            GameConstants.NearClip, GameConstants.FarClip);
}
}
Lastly, here is the Update method of the Game class:
spaceship.Update(currentGamePadState,
    currentKeyboardState); // this will be changed when asteroids are placed in the game, by adding a new parameter with asteroids.

float aspectRatio = graphics.GraphicsDevice.Viewport.AspectRatio;

gameCamera.Update(spaceship.ForwardDirection, spaceship.VerticalDirection,
    spaceship.Position, aspectRatio);
Somewhere in your code you probably have logic that manipulates and sets the variables 'ForwardDirection' and 'VerticalDirection', which seem to represent angular values around the Y and X axes respectively. Presumably you intend to have a variable that represents an angular value around the Z axis as well. I'm also assuming (and your code implies) that these variables are ultimately how you store your spaceship's orientation from frame to frame.
As long as you continue to try to represent an orientation using these angles, you will find it difficult to achieve the control of your spaceship.
There are several types of orientation representations available. The 3 angles approach has inherent weaknesses when it comes to combining rotations in 3 dimensions.
My recommendation is that you shift paradigms and consider storing and manipulating your orientations in a Matrix or a Quaternion form. Once you learn to manipulate a matrix or quaternion, what you are trying to do becomes almost unbelievably easy.
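For example, here is a minimal sketch of the quaternion approach using XNA types (the member and method names are illustrative, not taken from your code): store a single Quaternion for the ship's orientation and fold each frame's yaw/pitch/roll into it, then build one world matrix from it.
// Illustrative only - replaces the separate ForwardDirection/VerticalDirection angles.
Quaternion orientation = Quaternion.Identity;

public void ApplyRotation(float yaw, float pitch, float roll)
{
    // Combine this frame's incremental rotation with the stored orientation,
    // renormalizing to keep floating-point drift in check.
    Quaternion delta = Quaternion.CreateFromYawPitchRoll(yaw, pitch, roll);
    orientation = Quaternion.Normalize(orientation * delta);
}

public Matrix WorldMatrix()
{
    // One world matrix covers all three axes; no need for a second worldMatrix2.
    return Matrix.CreateFromQuaternion(orientation) * Matrix.CreateTranslation(Position);
}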