Perspective Projection Matrix transforms like Orthogonal - c#

Suppose I have a point which is not at (0, 0, 0) and a perspective camera looking at (0, 0, 0).
To my understanding, if I move the perspective camera along the z axis, the point on the screen should move as well. The farther the camera is, the closer the point should be to (0, 0) in screen coordinates.
In my C# program, the camera movement does not affect the screen coordinates (x, y) at all. It only changes the z coordinate, just like an orthographic camera would. Here is the minimal example:
Vector3 v = new Vector3(3.0f);
// Move camera z to -10 from the center
Matrix4x4 viewMatrix = Matrix4x4.CreateLookAt(new Vector3(0.0f, 0.0f, -10.0f), new Vector3(0.0f, 0.0f, 0.0f), new Vector3(0.0f, 1.0f, 0.0f));
Matrix4x4 projectionMatrix = Matrix4x4.CreatePerspectiveFieldOfView((float)Math.PI / 3.0f, 1.0f, 0.1f, 100.0f);
Vector3 v1 = Vector3.Transform(v, viewMatrix * projectionMatrix);
Console.WriteLine(v1); //<-5.1961527, 5.1961527, 12.912912>
// Move camera z to -1 from the center
viewMatrix = Matrix4x4.CreateLookAt(new Vector3(0.0f, 0.0f, -1.0f), new Vector3(0.0f, 0.0f, 0.0f), new Vector3(0.0f, 1.0f, 0.0f));
Vector3 v2 = Vector3.Transform(v, viewMatrix * projectionMatrix);
Console.WriteLine(v2); //<-5.1961527, 5.1961527, 3.903904>
What is wrong with my reasoning?

Maybe the W of the projected coordinate is not 1.
In homogeneous coordinates, W distinguishes points (W = 1) from directions (W = 0), but after a perspective projection W instead carries the perspective divisor. The GPU automatically divides X, Y, and Z by W after the vertex shader runs, so the shader does not need to set W to 1 explicitly, and most 3D math libraries likewise return W as the divisor instead of normalizing it back to 1. Vector3.Transform() discards that W, which is why your x and y never change.
If you want the proper projected coordinates, divide manually: use Vector4.Transform() and divide X, Y, and Z by W.
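As a minimal sketch with System.Numerics, reusing the question's matrices:
Vector4 clip = Vector4.Transform(new Vector4(v, 1.0f), viewMatrix * projectionMatrix);
Vector3 ndc = new Vector3(clip.X, clip.Y, clip.Z) / clip.W; // the perspective divide
Console.WriteLine(ndc); // x and y now shrink toward (0, 0) as the camera moves farther away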

Related

c# monogame 3d render billboard always facing camera with ability to rotation along X axis

I'm programming a 3D game in C# MonoGame, and I would like the particles to always face the camera. The code below works for that. I only send the particles' texture, size, and rotation to HLSL, which allows me to calculate the corners on the GPU.
This is the code that works, without the particles rotating:
output.PositionWS is the vertex world position.
CameraPosWS is the camera world position.
size.x and size.y are the X and Y sizes of the billboard.
output.PositionWS = mul(float4(input.inPositionOS, 1), World).xyz;
float3 ParticleToCamera = output.PositionWS - CameraPosWS;
float3 ParticleUp = float3(0, 0, 1);
float3 ParticleRight = normalize(cross(ParticleToCamera, ParticleUp));
finalPosition.xyz += ((input.inTexCoords.x - 0.5f) * size.x) * ParticleRight;
finalPosition.xyz += ((0.5f - input.inTexCoords.y) * size.y) * ParticleUp;
But I would like the particle to now rotate while still always facing the camera. I was thinking of using a matrix to transform the calculated corners of the billboard to rotate the billboard, but I haven't been able to get it to work.
This is the code with billboard rotation along the X axis, which doesn't work:
rotation is an angle in radians.
float3 ParticleToCamera = output.PositionWS - CameraPosWS;
float3 ParticleUp = float3(0, 0, 1);
float3 ParticleRight = normalize(cross(ParticleToCamera, ParticleUp));
float3x3 RotationMatrix = float3x3(
    ParticleRight,
    ParticleUp,
    ParticleToCamera
);
float3 rotatedVertex = mul(float3(input.inPositionOS.xy, 0), RotationMatrix);
// Only apply rotation on the X axis
rotatedVertex.x = rotatedVertex.x * cos(rotation);
finalPosition.xyz += ((input.inTexCoords.x - 0.5f) * size.x) * rotatedVertex.x;
finalPosition.xyz += ((0.5f - input.inTexCoords.y) * size.y) * rotatedVertex.y;
You are overthinking this problem. The point of the vertex shader is to transform from an object/world coordinate system into a screen-based coordinate system.
The orientation is already screen aligned.
I will assume that scaling is accounted for in the size variable. Otherwise, scale it using ParticleToCamera.z.
Please note the following code is incomplete without the initial finalPosition definition: I have assumed it to be PositionWS.
output.PositionWS = mul(float4(input.inPositionOS, 1), World).xyz;
float3 ParticleToCamera = output.PositionWS - CameraPosWS;
float3 ParticleUp = float3(0, 0, 1);
float3 ParticleRight = float3(1, 0, 0);
finalPosition.xyz += ((input.inTexCoords.x - 0.5f) * size.x) * ParticleRight;
finalPosition.xyz += ((0.5f - input.inTexCoords.y) * size.y) * ParticleUp;
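If you still want the roll from the original question on top of this, one common approach is to rotate the 2D corner offset within the billboard's plane before spanning it with the right/up axes. A minimal C#-style sketch of that math (ox, oy and the variable names are stand-ins for the shader inputs, not the answerer's code):
// Corner offset in the billboard's 2D plane.
float ox = (texCoordX - 0.5f) * sizeX;
float oy = (0.5f - texCoordY) * sizeY;
// Standard 2D rotation of the offset by 'rotation' radians.
float c = (float)Math.Cos(rotation);
float s = (float)Math.Sin(rotation);
Vector3 corner = (c * ox - s * oy) * particleRight + (s * ox + c * oy) * particleUp;
finalPosition += corner;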

Unity - Determining UVs for a circular plane mesh generated by code

I'm trying to generate a circular mesh made up of triangles with a common center at the center of the circle.
The mesh is generated properly, but the UVs are not, and I am having some trouble understanding how to add them.
I assumed I could just copy the vertex pattern, but it didn't work out.
Here is the function:
private void _MakeMesh(int sides, float radius = 0.5f)
{
    m_LiquidMesh.Clear();
    float angleStep = 360.0f / (float) sides;
    List<Vector3> vertexes = new List<Vector3>();
    List<int> triangles = new List<int>();
    List<Vector2> uvs = new List<Vector2>();
    Quaternion rotation = Quaternion.Euler(0.0f, angleStep, 0.0f);
    // Make first triangle.
    vertexes.Add(new Vector3(0.0f, 0.0f, 0.0f));
    vertexes.Add(new Vector3(radius, 0.0f, 0.0f));
    vertexes.Add(rotation * vertexes[1]);
    // First UV ??
    uvs.Add(new Vector2(0, 0));
    uvs.Add(new Vector2(1, 0));
    uvs.Add(rotation * uvs[1]);
    // Add triangle indices.
    triangles.Add(0);
    triangles.Add(1);
    triangles.Add(2);
    for (int i = 0; i < sides - 1; i++)
    {
        triangles.Add(0);
        triangles.Add(vertexes.Count - 1);
        triangles.Add(vertexes.Count);
        // UV ??
        vertexes.Add(rotation * vertexes[vertexes.Count - 1]);
    }
    m_LiquidMesh.vertices = vertexes.ToArray();
    m_LiquidMesh.triangles = triangles.ToArray();
    m_LiquidMesh.uv = uvs.ToArray();
    m_LiquidMesh.RecalculateNormals();
    m_LiquidMesh.RecalculateBounds();
    Debug.Log("<color=yellow>Liquid mesh created</color>");
}
How does mapping UV work in a case like this?
Edit: I'm trying to use this circle as an effect of something flowing outwards from the center (think: liquid mesh for a brewing pot)
This is an old post, but maybe someone else will benefit from my solution.
So basically I gave my center point the center of UV space, (0.5, 0.5), and then used the circle formula to give every other point its UV coordinate. Of course I had to remap the cos and sin results from -1..1 to 0..1, and after that everything worked great.
Vector2[] uv = new Vector2[vertices.Length];
uv[uv.Length - 1] = new Vector2(0.5f, 0.5f);
for (int i = 0; i < uv.Length - 1; i++)
{
    float radians = (float) i / (uv.Length - 1) * 2 * Mathf.PI;
    uv[i] = new Vector2(Mathf.Cos(radians).Remap(-1f, 1f, 0f, 1f), Mathf.Sin(radians).Remap(-1f, 1f, 0f, 1f));
}
mesh.uv = uv;
Remap is an extension method like the one below; it takes a value in one range and remaps it to another range (in this case from -1..1 to 0..1):
public static float Remap(this float value, float from1, float to1, float from2, float to2) {
    return (value - from1) / (to1 - from1) * (to2 - from2) + from2;
}
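Adapted to the question's ordering (center vertex first, with the seam vertex duplicated at the end of the rim), a sketch of the same idea, where cos * 0.5f + 0.5f is simply the Remap call written out; the sign of the sin term may need flipping depending on your winding:
uvs.Add(new Vector2(0.5f, 0.5f)); // center vertex maps to the middle of UV space
for (int k = 0; k <= sides; k++)
{
    float radians = k * angleStep * Mathf.Deg2Rad;
    // cos/sin lie in -1..1; * 0.5f + 0.5f remaps them into 0..1
    uvs.Add(new Vector2(Mathf.Cos(radians) * 0.5f + 0.5f, Mathf.Sin(radians) * 0.5f + 0.5f));
}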

OpenGL - Rotating a plane around a specific axis?

I am rotating a plane composed of these vertices,
Vector3[] VertexPositionData = new Vector3[]{
    new Vector3( 0f, 0f, 1f),
    new Vector3( 1f, 0f, 1f),
    new Vector3( 1f, 1f, 1f),
    new Vector3( 0f, 1f, 1f)
};
on its y-axis with:
Rotation.Y += (float)Math.PI / 4;
The effect is shown above. But I'd rather the plane rotated around its left edge, so that the yellow remains fixed to the red.
The model matrix is calculated as per usual with,
public Matrix4 GetModelMatrix()
{
    return Matrix4.Scale(Scale) * Matrix4.CreateRotationX(Rotation.X) * Matrix4.CreateRotationY(Rotation.Y) * Matrix4.CreateRotationZ(Rotation.Z) * Matrix4.CreateTranslation(Position);
}
Besides modifying the X and Z positions, how can I achieve this?
The plane is rotating around its local Y axis. If you want the plane to rotate around its left edge, you need to align that edge with the Y axis; in your case, the plane is 1 unit away from the origin on the Z axis.
You can either modify its vertices like so:
Vector3[] VertexPositionData = new Vector3[]{
    new Vector3( 0f, 0f, 0f),
    new Vector3( 1f, 0f, 0f),
    new Vector3( 1f, 1f, 0f),
    new Vector3( 0f, 1f, 0f)
};
Or you can translate it 1 unit on the Z axis before rotating, so that the edge is aligned with the origin (0, 0, 0), like so:
Matrix4 result = Matrix4.Scale(Scale) * Matrix4.CreateTranslation(new Vector3(0, 0, -1)) * Matrix4.CreateRotationX(Rotation.X) * Matrix4.CreateRotationY(Rotation.Y) * Matrix4.CreateRotationZ(Rotation.Z) * Matrix4.CreateTranslation(Position);
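More generally, rotating around an arbitrary pivot without permanently moving the object follows the same translate-rotate-translate-back pattern. A sketch, where RotateAround is a hypothetical helper rather than OpenTK API:
public Matrix4 RotateAround(Vector3 pivot, Matrix4 rotation)
{
    // Move the pivot to the origin, apply the rotation, then move back.
    return Matrix4.CreateTranslation(-pivot) * rotation * Matrix4.CreateTranslation(pivot);
}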

OpenGL. Moving light source

In my program I use OpenTK with C#, and I have trouble with a light source: I can't tie it to the camera, so it stays at a fixed position.
Here is the code of glControl1_Load():
float[] light_ambient = { 0.2f, 0.2f, 0.2f, 1.0f };
float[] light_diffuse = { 1.0f, 1.0f, 1.0f, 1.0f };
float[] light_specular = { 1.0f, 1.0f, 1.0f, 1.0f };
float[] spotdirection = { 0.0f, 0.0f, -1.0f };
GL.Light(LightName.Light0, LightParameter.Ambient, light_ambient);
GL.Light(LightName.Light0, LightParameter.Diffuse, light_diffuse);
GL.Light(LightName.Light0, LightParameter.Specular, light_specular);
GL.Light(LightName.Light0, LightParameter.ConstantAttenuation, 1.8f);
GL.Light(LightName.Light0, LightParameter.SpotCutoff, 45.0f);
GL.Light(LightName.Light0, LightParameter.SpotDirection, spotdirection);
GL.Light(LightName.Light0, LightParameter.SpotExponent, 1.0f);
GL.LightModel(LightModelParameter.LightModelLocalViewer, 1.0f);
GL.LightModel(LightModelParameter.LightModelTwoSide, 1.0f);
GL.Enable(EnableCap.Light0);
GL.Enable(EnableCap.Lighting);
GL.Enable(EnableCap.DepthTest);
GL.Enable(EnableCap.ColorMaterial);
GL.ShadeModel(ShadingModel.Flat);
glControl1_Paint():
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
GL.MatrixMode(MatrixMode.Modelview);
GL.LoadMatrix(ref cameramatrix);
GL.Light(LightName.Light0, LightParameter.Position, new float[]{0.0f, 0.0f, 0.0f, 1.0f});
If I'm not mistaken, the light source's coordinates are stored in eye-space coordinates. So what's wrong?
Load the identity matrix instead of your camera matrix into the modelview before setting the light position. Your light source will then always stay at the same location relative to your camera.
"If the w value is nonzero, the light is positional, and the (x, y, z) values specify the location of the light in homogeneous object coordinates. (See Appendix F.) This location is transformed by the modelview matrix and stored in eye coordinates."
More details are in the OpenGL Programming Guide's lighting chapter; look for Example 5-7.
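A sketch of the suggested fix in glControl1_Paint(): set the light position under the identity modelview, so it is fixed in eye space and thus attached to the camera, then load the camera matrix for drawing:
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
GL.MatrixMode(MatrixMode.Modelview);
GL.LoadIdentity();
// With the identity modelview, this position is interpreted directly in eye space.
GL.Light(LightName.Light0, LightParameter.Position, new float[] { 0.0f, 0.0f, 0.0f, 1.0f });
GL.LoadMatrix(ref cameramatrix);
// ...draw the scene as before...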

XNA - how to draw an object farther

Here's how I draw a shape defined by vertices that are not shown here.
Vector3 position = new Vector3(5, 5, 1);
Matrix world = Matrix.CreateTranslation(position);
BasicEffect basicEffect = new BasicEffect(graphicsDevice);
Matrix view = Matrix.CreateLookAt(new Vector3(0, 0, -20), new Vector3(0, 0, 100), Vector3.Up);
Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4,
    graphics.Viewport.AspectRatio,
    1.0f,
    100);
// Set BasicEffect parameters.
basicEffect.World = world;
basicEffect.View = view;
basicEffect.Projection = projection;
//....draw some shape with basicEffect
I would like to draw the same shape, only farther away, so that its center stays at the same (x, y) pixel on screen but the shape is overall smaller as it's more distant.
I've tried scaling the position vector but had no success with it:
position.Z *= 2;
position.X *= 2;
position.Y *= 2;
What's the right way to do this?
Think about it geometrically: moving the object away from the camera means moving it along the line defined by two points, the camera's position and the object's position.
Now it's easy!
1) Find the camera-to-object vector, i.e.
Vector3 direction = objectPosition - cameraPosition;
2) Move the object along that vector by a certain amount, that is:
2.1) Normalize the direction
direction.Normalize();
2.2) Move the object by an amount x in that direction
objectPosition += direction * x;
And there you have it.
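Putting the steps together with the question's setup, as a minimal sketch (distance is an assumed amount to push the object back):
Vector3 cameraPosition = new Vector3(0, 0, -20);
Vector3 objectPosition = new Vector3(5, 5, 1);
float distance = 10.0f; // assumption: how much farther to move the object

Vector3 direction = objectPosition - cameraPosition; // camera-to-object vector
direction.Normalize();
objectPosition += direction * distance;

// The center stays on the same camera ray, so it keeps its (x, y) pixel.
basicEffect.World = Matrix.CreateTranslation(objectPosition);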
