XNA setting isometric camera view or world? - c#

I'm trying to create an isometric (35 degrees) view by using a camera.
I'm drawing a triangle which rotates around Z axis.
For some reason the triangle is being cut at a certain point of the rotation
giving this result
I calculate the camera position by angle and z distance
using this site: http://www.easycalculation.com/trigonometry/triangle-angles.php
This is how I define the camera:
// isometric angle is 35.2° => Y = -14.1759f for Z = 10f
Vector3 camPos = new Vector3(0, -14.1759f, 10f);
Vector3 lookAt = new Vector3(0, 0, 0);
viewMat = Matrix.CreateLookAt(camPos, lookAt, Vector3.Up);
//projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, Game.GraphicsDevice.Viewport.AspectRatio, 1, 100);
float width = GameInterface.vpMissionWindow.Width;
float height = GameInterface.vpMissionWindow.Height;
projMat = Matrix.CreateOrthographic(width, height, 1, 1000);
worldMat = Matrix.Identity;
This is how I recalculate the world matrix rotation:
worldMat = Matrix.CreateRotationZ(3* visionAngle);
// keep triangle around this center point
worldMat *= Matrix.CreateTranslation(center);
effect.Parameters["xWorld"].SetValue(worldMat);
// updating rotation angle
visionAngle += 0.005f;
Any idea what I might be doing wrong? This is my first time working on a 3D project.

Your triangle is being clipped by the far-plane.
When your GPU renders stuff, it only renders pixels that fall within the range (-1, -1, 0) to (1, 1, 1). That is: between the bottom left of the viewport and the top right. But also between some "near" plane and some "far" plane on the Z axis. (As well as doing clipping, this also determines the range of values covered by the depth buffer.)
Your projection matrix takes vertices that are in world or view space, and transforms them so that they fit inside that raster space. You may have seen the standard image of a view frustum for a perspective projection matrix, that shows how the edges of that raster region appear when transformed back into world space. The same thing exists for orthographic projections, but the edges of the view region are parallel.
The simple answer is to increase the distance to your far plane so that all your geometry falls within the raster region. It is the fourth parameter to Matrix.CreateOrthographic.
Increasing the distance between your near and far plane will reduce the precision of your depth buffer - so avoid making it any bigger than you need it to be.
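If you want to check where your geometry actually lands relative to those planes, you can transform a vertex into view space and look at its depth. A quick diagnostic sketch, with vertexPos standing in for one of your triangle's model-space vertices:
    // Diagnostic only: where does this vertex end up in view space?
    Vector3 viewSpacePos = Vector3.Transform(vertexPos, worldMat * viewMat);
    // The camera looks down -Z in view space, so the depth along the view
    // direction is -viewSpacePos.Z; it must lie between the near and far planes
    // you pass to Matrix.CreateOrthographic (1 and 1000 in the code above).
    float depth = -viewSpacePos.Z;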

I think your far plane is cropping it, so you should increase the far-plane argument of your projection matrix...
projMat = Matrix.CreateOrthographic(width, height, 1, 10000);

Related

How to rotate a Vector3 of dimensions by a Vector3 of angles?

I have a Vector3 of how many blocks in a grid a piece is along each axis. For example, if one of these vectors was (1, 2, 1) it would be 1 block long on the x-axis, 2 blocks long on the y-axis, and 1 block long on the z-axis. I also have a Vector3 of angles that denote rotations along each axis. For example, if one of these vectors was (90, 180, 0) the piece would be rotated by 90 degrees around the x-axis, 180 degrees around the y-axis, and 0 degrees around the z-axis. What I can't figure out is how to rotate the dimensions of a piece by its vector of rotation angles so I know what points in space it's occupying.
public class Block
{
    private Vector3 localOrientation;
    private Vector3 dimensions;

    public Vector3 GetRotatedDimensions()
    {
        // your implementation here
    }
}
If I understand correctly, there is something fundamentally wrong with your question. There can be no "rotated dimensions". Let's use a rectangle to demonstrate this. (I didn't understand correctly.)
Suppose there's this initial rectangle:
and you rotate it. This is what you get:
Using a single Vector2, you can't differentiate a "rotated x*y rectangle" from a "initial (x')*(y') rectangle". To sufficiently describe the position of a rectangle, you need to keep the size AND the rotation in your block-describing variable.
Are x' and y' what you wanted to know? I doubt it. Oh, you do? Great!
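For the 2D case, if I'm reading the picture right, an x-by-y rectangle rotated by an angle θ ends up with axis-aligned bounding dimensions x' = |x·cos θ| + |y·sin θ| and y' = |x·sin θ| + |y·cos θ|; the 3D procedure below generalizes this.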
In 3 dimensions, I would define what you're looking for as
The minimum dimensions of a rectangular box that
1. has its faces parallel to the XY, XZ and YZ planes and
2. contains another rectangular box of known dimensions and orientation.
There are possibly more elegant solutions, but I'd brute force it like this:
1. Make 8 Vector3 objects (one for each vertex of your block).
2. Rotate all of them around the x-axis.
3. Rotate them (the new ones you got from step 2) around the y-axis.
4. Rotate them (the new ones you got from step 3) around the z-axis.
5. Find the min and max values of the x, y and z coordinates among all your points.
6. Your new dimensions are (x_max - x_min), (y_max - y_min), (z_max - z_min).
I'm not 100% sure about this though, so make sure you verify the results!
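A rough sketch of that brute-force approach, filling in the stub from the question; it uses XNA's Vector3/Matrix helpers and assumes dimensions holds the block's size and localOrientation holds the rotation angles in degrees, as described in the question:
    public Vector3 GetRotatedDimensions()
    {
        // Step 1: build the 8 corners of the box, centered on the origin.
        Vector3 half = dimensions * 0.5f;
        Vector3[] corners = new Vector3[8];
        for (int i = 0; i < 8; i++)
        {
            corners[i] = new Vector3(
                (i & 1) == 0 ? -half.X : half.X,
                (i & 2) == 0 ? -half.Y : half.Y,
                (i & 4) == 0 ? -half.Z : half.Z);
        }

        // Steps 2-4: rotate every corner around the x, then y, then z axis.
        Matrix rotation =
            Matrix.CreateRotationX(MathHelper.ToRadians(localOrientation.X)) *
            Matrix.CreateRotationY(MathHelper.ToRadians(localOrientation.Y)) *
            Matrix.CreateRotationZ(MathHelper.ToRadians(localOrientation.Z));
        Vector3[] rotated = new Vector3[8];
        Vector3.Transform(corners, ref rotation, rotated);

        // Steps 5-6: the rotated dimensions are the extents of the axis-aligned
        // box that contains all the rotated corners.
        Vector3 min = rotated[0], max = rotated[0];
        for (int i = 1; i < 8; i++)
        {
            min = Vector3.Min(min, rotated[i]);
            max = Vector3.Max(max, rotated[i]);
        }
        return max - min;
    }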

Accelerometer movement limit according to screen size

I am using this code:
void Update()
{
    accel = Input.acceleration.x;
    transform.Translate(0, 0, accel);
    transform.position = new Vector3(Mathf.Clamp(transform.position.x, -6.9f, 6.9f), -4.96f, 18.3f);
}
It makes my gameObject move the way I want, BUT the problem is that when I put the app on my phone, (-6.9) and (6.9) are not the "ends" of my screen, and I cannot figure out how to adjust those values to each screen size.
This is going to be a bit of a longer answer, so please bear with me here.
Note that this post is assuming that you are using an orthographic camera - the formula used won't work for perspective cameras.
As far as I can understand your desire is to keep your object inside of the screen boundaries. Screen boundaries in Unity are determined by a combination of camera size and screen size.
float height = Camera.main.orthographicSize * 2;
A camera's orthographic size is half the screen's height in world units, so multiplying it by two gives the number of world units between the top and bottom of the screen.
To get the width, we take this value and multiply it by the screen's width divided by its height.
float width = height * Screen.width / Screen.height;
Now we have the dimensions of our screen, but we still need to keep the object inside those bounds.
First, we create an instance of the type Bounds, which we will use to determine the maximum and minimum values for the position.
Bounds bounds = new Bounds (Vector3.zero, new Vector3(width, height, 0));
Note that we used Vector3.zero since the center of our bounds instance should be the world's center. This is the center of the area that your object should be able to move inside.
Lastly, we clamp the object's position values, according to our resulting bounds' properties.
Vector3 clampedPosition = transform.position;
clampedPosition.x = Mathf.Clamp(clampedPosition.x, bounds.min.x, bounds.max.x);
transform.position = clampedPosition;
This should ensure that your object will never leave the screen's side boundaries!
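Putting those pieces together inside your Update method might look like this; it is only a sketch, reusing the fixed Y/Z values and accelerometer translate from your snippet and assuming Camera.main is the orthographic camera rendering the object:
    void Update()
    {
        float accel = Input.acceleration.x;
        transform.Translate(0, 0, accel);

        // Screen bounds in world units, recomputed from the camera each frame.
        float height = Camera.main.orthographicSize * 2f;
        float width = height * Screen.width / Screen.height;
        Bounds bounds = new Bounds(Vector3.zero, new Vector3(width, height, 0));

        // Clamp the X position to the screen's side boundaries.
        transform.position = new Vector3(
            Mathf.Clamp(transform.position.x, bounds.min.x, bounds.max.x),
            -4.96f, 18.3f);
    }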

Camera units in Unity5

I'm currently programming a 2D top-view Unity game, and I want to set up the camera so that only a specific area is visible. That means I know the size of my area, and when the camera, which is currently following the player, reaches the border of the area, I want it to stop.
So here is my question: I know where the camera is and how it can follow the player, but I don't know how to calculate the distance between the border of the field and the border of what the camera sees. How can I do that?
Essentially, treat your playable area as a rectangle. Then, make a smaller rectangle within that rectangle that accounts for the camera's orthographic size. Don't forget to include the aspect ratio of your camera when calculating horizontal bounds.
Rect myArea;             // this stores the bounds of your playable area
Camera cam;              // this is your orthographic camera, probably Camera.main
GameObject playerObject; // this is your player

float newX = Mathf.Clamp(
    playerObject.transform.position.x,
    myArea.xMin + cam.orthographicSize * cam.aspect,
    myArea.xMax - cam.orthographicSize * cam.aspect
);
float newY = Mathf.Clamp(
    playerObject.transform.position.y,
    myArea.yMin + cam.orthographicSize,
    myArea.yMax - cam.orthographicSize
);
cam.transform.position = new Vector3(newX, newY, cam.transform.position.z);
If you're using an alternative plane (say xz instead of xy), just swap out the corresponding dimensions in all the calculations.
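As a sketch of how that clamp could sit in a follow-camera script (the names ClampedFollowCamera, area and player are placeholders for your own setup, and the camera is assumed to be orthographic):
    using UnityEngine;

    public class ClampedFollowCamera : MonoBehaviour
    {
        public Rect area;        // the playable area in world units
        public Transform player; // the object the camera follows
        private Camera cam;

        void Awake()
        {
            cam = GetComponent<Camera>();
        }

        void LateUpdate()
        {
            // Shrink the playable rectangle by the camera's half-extents,
            // then keep the camera's position inside that smaller rectangle.
            float halfWidth = cam.orthographicSize * cam.aspect;
            float newX = Mathf.Clamp(player.position.x,
                area.xMin + halfWidth, area.xMax - halfWidth);
            float newY = Mathf.Clamp(player.position.y,
                area.yMin + cam.orthographicSize, area.yMax - cam.orthographicSize);
            transform.position = new Vector3(newX, newY, transform.position.z);
        }
    }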

Rendering a point or plane in 3D

Although my current project is XNA, this question is about the basic mathematics of 3D to 2D mapping. In fact, and for the same reason, let's assume a WinForms graphics surface to draw on.
I have the following configuration:
Camera position of (x=0, y=0, z=0) and direction vector of (x=0, y=0, z=0).
A line segment in 3D with the following points: (10, 10, 10), (100, 100, 100).
I want to transform these coordinates and draw them on a 2D surface. So depending on the camera, the line segment should transform from (x1, y1, z1),(x2, y2, z2) to (x1, y1),(x2, y2).
I think you are looking for an orthogonal or perspective projection. There is a lot of information online if you search for it but here is the gist.
A camera looking at the origin, located a distance d along the z-axis will project a point at (x,y,z) onto a plane as:
// Orthogonal
planar_x = x
planar_y = y
// Perspective
planar_x = x*d/(d-z)
planar_y = y*d/(d-z)
Example
A point at (10,10,10) with the camera located a distance of 500 along the z axis will have planar coordinates (10*500/(500-10), 10*500/(500-10)) = (10.204, 10.204)
A point at (10,10,100) with the camera located a distance of 500 along the z axis will have planar coordinates (10*500/(500-100), 10*500/(500-100)) = (12.5, 12.5)
So the closer a shape is to the camera the larger it appears.
To transform the planar model coordinates to pixel coordinates I use the following scaling
scale = max_model_size/Math.Min(Height,Width);
pixel_x = Width/2 + x/scale;
pixel_y = Height/2 - y/scale;
This is how I can use GDI to draw 3D shapes on a windows form.
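For completeness, a minimal WinForms sketch of the perspective formula and the pixel mapping above; the camera distance of 500 and the max_model_size of 200 are just assumed values for the example:
    using System;
    using System.Drawing;
    using System.Windows.Forms;

    public class ProjectionForm : Form
    {
        const float d = 500f;            // camera distance along the z-axis (assumed)
        const float maxModelSize = 200f; // model extent used for scaling (assumed)

        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e);
            // Project both endpoints of the segment (10,10,10)-(100,100,100) and draw it.
            PointF p1 = Project(10, 10, 10);
            PointF p2 = Project(100, 100, 100);
            e.Graphics.DrawLine(Pens.Black, p1, p2);
        }

        private PointF Project(float x, float y, float z)
        {
            // Perspective projection onto the viewing plane.
            float planarX = x * d / (d - z);
            float planarY = y * d / (d - z);

            // Map planar coordinates to pixel coordinates (screen y grows downward).
            float scale = maxModelSize / Math.Min(ClientSize.Height, ClientSize.Width);
            return new PointF(ClientSize.Width / 2f + planarX / scale,
                              ClientSize.Height / 2f - planarY / scale);
        }
    }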
Of course if you want to use OpenGL then look here for a similar question.

XNA 4.0: 2D Camera Y and X are going in wrong direction

So I know there are a few questions/answers regarding building a 2D Camera for XNA however people seem to just be happy posting their code without explanation.
I'm looking for more of an explanation of what I'm doing wrong.
First off, I understand the whole World -> View -> Projection -> Screen transformation.
My goal is to have a camera object that is centered in the viewport, where moving the camera's position up corresponds to moving up in the viewport and moving it right corresponds to moving right in the viewport.
I'm having difficulty implementing that functionality because the Y value of the viewport is inverted.
// In Camera class
private void UpdateViewTransform()
{
    // My thinking here was that I would create a projection matrix to center
    // the camera and then flip the Y axis appropriately
    Matrix proj = Matrix.CreateTranslation(new Vector3(_viewport.Width * 0.5f, _viewport.Height * 0.5f, 0)) *
                  Matrix.CreateScale(new Vector3(1f, -1f, 1f));

    // Here is the camera matrix. I believe I have to give the inverse of this matrix
    // to the SpriteBatch, since I want to go from world coordinates to camera coordinates
    _viewMatrix = Matrix.CreateRotationZ(_rotation) *
                  Matrix.CreateScale(new Vector3(_zoom, _zoom, 1.0f)) *
                  Matrix.CreateTranslation(_position.X, _position.Y, 0.0f);

    _viewMatrix = proj * _viewMatrix;
}
Can someone help me understand how I can build my view transformation to pass into the SpriteBatch so that I achieve what I'm looking for.
EDIT
This transformation seems to work, however I am unsure why. Can someone perhaps break it down for me so I understand it:
Matrix proj = Matrix.CreateTranslation(new Vector3(_viewport.Width * 0.5f, _viewport.Height * 0.5f, 0));

_viewMatrix = Matrix.CreateRotationZ(_rotation) *
              Matrix.CreateScale(new Vector3(_zoom, _zoom, 1.0f)) *
              Matrix.CreateTranslation(-1 * _position.X, _position.Y, 0.0f);

_viewMatrix = proj * _viewMatrix;
I've built a raytracer before, so I should be able to follow an explanation; my confusion lies with the fact that it's 2D and that SpriteBatch is hiding what it's doing from me.
Thanks!
Farid
If you flip everything on the Y axis with your scale matrix, you are flipping the models that SpriteBatch is drawing (textured quads). This means you also have to change your winding order (i.e., backface culling now interprets those triangles as showing their backs to the camera and culls them, so you have to change the rule it uses).
By default, SpriteBatch uses RasterizerState.CullCounterClockwise. When you call SpriteBatch.Begin you need to pass in RasterizerState.CullClockwise instead.
And, of course, Begin is where you pass in your transformation matrix.
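For reference, the XNA 4.0 Begin overload that takes both of those looks roughly like this (a sketch, with _viewMatrix being the transform built above):
    spriteBatch.Begin(
        SpriteSortMode.Deferred,
        BlendState.AlphaBlend,
        SamplerState.LinearClamp,
        DepthStencilState.None,
        RasterizerState.CullClockwise, // flipped winding because of the Y-flip
        null,                          // no custom effect
        _viewMatrix);                  // the camera transformation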
I haven't carefully checked your matrix operations - although I have a suspicion that the order is incorrect. I recommend you create a very simple testing app and build up your transformations one at a time.
I've fought with XNA a lot trying to get it to behave like other engines I have worked with before... my recommendation: it isn't worth it. Just go with the XNA standards and use the Matrix helper methods for creating your perspective / viewport matrices.
So after just thinking logically, I was able to deduce the proper transformation.
I'll outline the steps here in case anyone wants the real breakdown:
It is important to understand what a Camera Transformation (or View Transformation) is.
A View Transformation is normally what is needed to go from Coordinates relative to your Camera to World-Space coordinates. The inverse of the View Transformation would then make a world coordinate relative to your Camera!
Creating a View Matrix
1. Apply the camera's rotation. We are doing it on the Z axis only because this is a 2D camera.
2. Apply the translation of the camera. This makes sense, since we want the world coordinate, which is the sum of the camera coordinate and the object's position relative to the camera. You might notice that I multiplied the Y component by -1. This is so that increasing the camera's Y position corresponds to moving up, since the screen's Y value points downward.
3. Apply the camera's zoom.
Now the inverse of the matrix will do World Coordinates -> View Coordinates.
I also chose to center my camera in the center of the screen, so I included a pre-appended translation.
Matrix proj = Matrix.CreateTranslation(new Vector3(_viewport.Width * 0.5f, _viewport.Height * 0.5f, 0));

_viewMatrix = Matrix.CreateRotationZ(moveComponent.Rotation) *
              Matrix.CreateTranslation(moveComponent.Position.X, -1 * moveComponent.Position.Y, 0.0f) *
              Matrix.CreateScale(new Vector3(zoomComponent.Zoom, zoomComponent.Zoom, 1.0f));

_viewMatrix = proj * Matrix.Invert(_viewMatrix);
