OpenGL - Rotating a plane around a specific axis? - c#

I am rotating a plane composed of these vertices:
Vector3[] VertexPositionData = new Vector3[]{
new Vector3( 0f, 0f, 1f),
new Vector3( 1f, 0f, 1f),
new Vector3( 1f, 1f, 1f),
new Vector3( 0f, 1f, 1f)
};
on its y-axis with:
Rotation.Y += (float)Math.PI / 4;
The effect is shown above. But I'd rather the plane rotated around its left edge, so that the yellow remains fixed to the red.
The model matrix is calculated as usual with:
public Matrix4 GetModelMatrix()
{
return Matrix4.Scale(Scale) * Matrix4.CreateRotationX(Rotation.X) * Matrix4.CreateRotationY(Rotation.Y) * Matrix4.CreateRotationZ(Rotation.Z) * Matrix4.CreateTranslation(Position);
}
Besides modifying the X and Z positions, how can I achieve this?

The plane is rotating around its local Y axis. If you want the plane to rotate around its left edge, you need to align that edge with the Y axis. In your case, the plane is 1 unit away from the origin on the Z axis.
You can either modify its vertices like so:
Vector3[] VertexPositionData = new Vector3[]{
new Vector3( 0f, 0f, 0f),
new Vector3( 1f, 0f, 0f),
new Vector3( 1f, 1f, 0f),
new Vector3( 0f, 1f, 0f)
};
Or you can translate it 1 unit on the Z axis so that it is aligned with the origin (0, 0, 0), like so:
Matrix4 result = Matrix4.Scale(Scale) * Matrix4.CreateTranslation(new Vector3(0, 0, -1)) * Matrix4.CreateRotationX(Rotation.X) * Matrix4.CreateRotationY(Rotation.Y) * Matrix4.CreateRotationZ(Rotation.Z) * Matrix4.CreateTranslation(Position);
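If you'd rather keep the plane where it is and only pivot it around its left edge, a third option is to translate the pivot to the origin, rotate, and translate back. This is only a sketch (not part of the original answer), using the same OpenTK-style row-vector matrix order as GetModelMatrix above, with scaling omitted for brevity:
// Sketch only: rotate about the plane's left edge without permanently shifting the plane.
Vector3 pivot = new Vector3(0f, 0f, 1f); // the left edge lies on x = 0, z = 1
Matrix4 rotation = Matrix4.CreateRotationX(Rotation.X) * Matrix4.CreateRotationY(Rotation.Y) * Matrix4.CreateRotationZ(Rotation.Z);
Matrix4 model = Matrix4.CreateTranslation(-pivot) // move the pivot onto the Y axis
              * rotation                          // rotate around the pivot
              * Matrix4.CreateTranslation(pivot)  // move the plane back where it was
              * Matrix4.CreateTranslation(Position);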

Related

Perspective Projection Matrix transforms like Orthogonal

Suppose I have a point which is not in (0, 0, 0) and a perspective camera which is looking at (0, 0, 0).
To my understanding, if I move the perspective camera along the z axis, the point on the screen should move as well. The further the camera is, the closer the point should be towards (0, 0) in screen coordinates.
In my C# program, the camera movement does not affect screen coordinates (x, y) at all. It only changes the z coordinate just like an orthogonal camera. Here is the minimal example:
Vector3 v = new Vector3(3.0f);
// Move camera z to -10 from the center
Matrix4x4 viewMatrix = Matrix4x4.CreateLookAt(new Vector3(0.0f, 0.0f, -10.0f), new Vector3(0.0f, 0.0f, 0.0f), new Vector3(0.0f, 1.0f, 0.0f));
Matrix4x4 projectionMatrix = Matrix4x4.CreatePerspectiveFieldOfView((float)Math.PI / 3.0f, 1.0f, 0.1f, 100.0f);
Vector3 v1 = Vector3.Transform(v, viewMatrix * projectionMatrix);
Console.WriteLine(v1); //<-5.1961527, 5.1961527, 12.912912>
// Move camera z to -1 from the center
viewMatrix = Matrix4x4.CreateLookAt(new Vector3(0.0f, 0.0f, -1.0f), new Vector3(0.0f, 0.0f, 0.0f), new Vector3(0.0f, 1.0f, 0.0f));
Vector3 v2 = Vector3.Transform(v, viewMatrix * projectionMatrix);
Console.WriteLine(v2); //<-5.1961527, 5.1961527, 3.903904>
What is wrong in my reasoning?
Maybe the W of the projected coordinate is not 1.
In homogeneous coordinates, W = 1 marks a point and W = 0 a direction. However, after the vertex shader runs, the GPU automatically divides X, Y and Z by W (the perspective divide), so the vertex shader does not need to force W to 1, and most 3D math libraries likewise return the projected W as the divisor instead of 1. Vector3.Transform does not apply this divide for you, which is why the result looks orthographic.
If you want the properly projected coordinates, do the divide manually: transform with Vector4.Transform and divide X, Y and Z by the resulting W.
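For example, a minimal sketch of the manual divide, using the System.Numerics types from the question:
// Transform with W = 1, then do the perspective divide by hand.
Vector4 clip = Vector4.Transform(new Vector4(v, 1.0f), viewMatrix * projectionMatrix);
Vector3 projected = new Vector3(clip.X, clip.Y, clip.Z) / clip.W;
Console.WriteLine(projected); // the x and y values now shrink as the camera moves further away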

Unity - Determining UVs for a circular plane mesh generated by code

I'm trying to generate a circular mesh made up of triangles with a common center at the center of the circle.
The mesh is generated properly, but the UVs are not and I am having some trouble understanding how to add them.
I assumed I would just copy the vertexes' pattern, but it didn't work out.
Here is the function:
private void _MakeMesh(int sides, float radius = 0.5f)
{
m_LiquidMesh.Clear();
float angleStep = 360.0f / (float) sides;
List<Vector3> vertexes = new List<Vector3>();
List<int> triangles = new List<int>();
List<Vector2> uvs = new List<Vector2>();
Quaternion rotation = Quaternion.Euler(0.0f, angleStep, 0.0f);
// Make first triangle.
vertexes.Add(new Vector3(0.0f, 0.0f, 0.0f));
vertexes.Add(new Vector3(radius, 0.0f, 0.0f));
vertexes.Add(rotation * vertexes[1]);
// First UV ??
uvs.Add(new Vector2(0, 0));
uvs.Add(new Vector2(1, 0));
uvs.Add(rotation * uvs[1]);
// Add triangle indices.
triangles.Add(0);
triangles.Add(1);
triangles.Add(2);
for (int i = 0; i < sides - 1; i++)
{
triangles.Add(0);
triangles.Add(vertexes.Count - 1);
triangles.Add(vertexes.Count);
// UV ??
vertexes.Add(rotation * vertexes[vertexes.Count - 1]);
}
m_LiquidMesh.vertices = vertexes.ToArray();
m_LiquidMesh.triangles = triangles.ToArray();
m_LiquidMesh.uv = uvs.ToArray();
m_LiquidMesh.RecalculateNormals();
m_LiquidMesh.RecalculateBounds();
Debug.Log("<color=yellow>Liquid mesh created</color>");
}
How does mapping UV work in a case like this?
Edit: I'm trying to use this circle as an effect of something flowing outwards from the center (think: liquid mesh for a brewing pot)
This is an old post, but maybe someone else will benefit from my solution.
So basically I gave my center point the center of the UV space (0.5, 0.5) and then used the unit circle formula to give every other point its UV coordinate. Of course I had to remap the cos and sin results from -1..1 to 0..1, and everything works great.
Vector2[] uv = new Vector2[vertices.Length];
uv[uv.Length - 1] = new Vector2(0.5f, 0.5f);
for (int i = 0; i < uv.Length - 1; i++)
{
float radians = (float) i / (uv.Length - 1) * 2 * Mathf.PI;
uv[i] = new Vector2(Mathf.Cos(radians).Remap(-1f, 1f, 0f, 1f), Mathf.Sin(radians).Remap(-1f, 1f, 0f, 1f));
}
mesh.uv = uv;
Where Remap is an extension method like this; it takes a value in one range and remaps it to another range (in this case from -1..1 to 0..1):
public static float Remap(this float value, float from1, float to1, float from2, float to2) {
return (value - from1) / (to1 - from1) * (to2 - from2) + from2;
}
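Applied to the _MakeMesh layout from the question (center vertex at index 0, rim vertices following in angular order), the same idea could look roughly like this. Treat it as a sketch; circleUvs is a made-up name and the loop assumes one UV per vertex in the vertexes list:
// Sketch: center UV in the middle of the texture, rim UVs on a circle remapped to 0..1.
Vector2[] circleUvs = new Vector2[vertexes.Count];
circleUvs[0] = new Vector2(0.5f, 0.5f); // center vertex
for (int i = 1; i < circleUvs.Length; i++)
{
float radians = (i - 1) * angleStep * Mathf.Deg2Rad; // same angle as the matching rim vertex
circleUvs[i] = new Vector2(Mathf.Cos(radians) * 0.5f + 0.5f, Mathf.Sin(radians) * 0.5f + 0.5f);
}
m_LiquidMesh.uv = circleUvs;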

vTexCoord value is not working - c# OpenTK

My app displays an image in full screen using the OpenGL shader code shown below. The vertex shader I used here is copied from somewhere. Can anyone explain why vTexCoord = (a_position.xy+1)/2; is used here? When I try vTexCoord = a_position.xy, my OpenGL output is split into four rectangles and only the top-right one shows the image; the other three appear blurred. What change do I need to make for it to work with vTexCoord = a_position.xy?
The important functions used in the project are shown below. Please check and help me correct them.
float[] vertices = {
// Left bottom triangle
-1f, -1f, 0f,
1f, -1f, 0f,
1f, 1f, 0f,
// Right top triangle
1f, 1f, 0f,
-1f, 1f, 0f,
-1f, -1f, 0f
};
private void CreateShaders()
{
/***********Vert Shader********************/
vertShader = GL.CreateShader(ShaderType.VertexShader);
GL.ShaderSource(vertShader, @"attribute vec3 a_position;
varying vec2 vTexCoord;
void main() {
vTexCoord = (a_position.xy+1)/2;
gl_Position = vec4(a_position, 1);
}");
GL.CompileShader(vertShader);
/***********Frag Shader ****************/
fragShader = GL.CreateShader(ShaderType.FragmentShader);
GL.ShaderSource(fragShader, @"precision highp float;
uniform sampler2D sTexture;varying vec2 vTexCoord;
void main ()
{
vec4 color= texture2D (sTexture, vTexCoord);
gl_FragColor =color;
}");
GL.CompileShader(fragShader);
}
private void InitBuffers()
{
buffer = GL.GenBuffer();
positionLocation = GL.GetAttribLocation(program, "a_position");
positionLocation1 = GL.GetUniformLocation(program, "sTexture");
GL.EnableVertexAttribArray(positionLocation);
GL.BindBuffer(BufferTarget.ArrayBuffer, buffer);
GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(vertices.Length * sizeof(float)), vertices, BufferUsageHint.StaticDraw);
GL.VertexAttribPointer(positionLocation, 3, VertexAttribPointerType.Float, false, 0, 0);
}
public void DrawImage(int image)
{
GL.Viewport(new Rectangle(0, 0, ScreenWidth, ScreenHeight));
GL.MatrixMode(MatrixMode.Projection);
GL.PushMatrix();
GL.LoadIdentity();
//GL.Ortho(0, 1920, 0, 1080, 0, 1);
GL.MatrixMode(MatrixMode.Modelview);
GL.PushMatrix();
GL.LoadIdentity();
GL.Disable(EnableCap.Lighting);
GL.Enable(EnableCap.Texture2D);
GL.ActiveTexture(TextureUnit.Texture0);
GL.BindTexture(TextureTarget.Texture2D, image);
GL.Uniform1(positionLocation1, 0);
GL.Begin(PrimitiveType.Quads);
GL.TexCoord2(0, 1);
GL.Vertex3(0, 0, 0);
GL.TexCoord2(0, 0);
GL.Vertex3(1920, 0, 0);
GL.TexCoord2(1, 1);
GL.Vertex3(1920, 1080, 0);
GL.TexCoord2(1, 0);
GL.Vertex3(0, 1080, 0);
GL.End();
RunShaders();
GL.Disable(EnableCap.Texture2D);
GL.PopMatrix();
GL.MatrixMode(MatrixMode.Projection);
GL.PopMatrix();
GL.MatrixMode(MatrixMode.Modelview);
glControl1.SwapBuffers();
}
private void RunShaders()
{
GL.UseProgram(program);
GL.DrawArrays(PrimitiveType.Triangles, 0, vertices.Length / 3);
}
Can anyone explain why here used vTexCoord = (a_position.xy+1)/2;?
The vertex coordinates in your example are in the range [-1, 1] for the x and y components. This matches the normalized device space. The normalized device space is a cube from the left-lower-front (-1, -1, -1) to the right-top-back (1, 1, 1); this is the area which is "visible", and it is mapped to the viewport.
As a result, the vertex coordinates in your example form a rectangle which covers the entire viewport.
If the texture coordinates should wrap the entire texture onto the quad, then the texture coordinates (u, v) have to be in the range [0, 1]. (0, 0) is the lower left of the texture and (1, 1) the upper right.
See also How do OpenGL texture coordinates work?
So the x and y components of a_position have to be mapped from the range [-1, 1] to the range [0, 1] to be used as uv coordinates for the texture lookup:
u = (a_position.x + 1) / 2
v = (a_position.y + 1) / 2
What change should I do to work it with vTexCoord = a_position.xy?
This is not possible with the position attribute alone, but you can add a separate texture coordinate attribute, which is the common approach:
attribute vec3 a_position;
attribute vec2 a_texture;
varying vec2 vTexCoord;
void main() {
vTexCoord = a_texture;
gl_Position = vec4(a_position, 1);
}
float[] vertices = {
// x y z u v
// Left bottom triangle
-1f, -1f, 0f, 0f, 0f,
1f, -1f, 0f, 1f, 0f,
1f, 1f, 0f, 1f, 1f,
// Right top triangle
1f, 1f, 0f, 1f, 1f,
-1f, 1f, 0f, 0f, 1f,
-1f, -1f, 0f, 0f, 0f
};
buffer = GL.GenBuffer();
GL.BindBuffer(BufferTarget.ArrayBuffer, buffer);
GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(vertices.Length * sizeof(float)), vertices, BufferUsageHint.StaticDraw);
positionLocation = GL.GetAttribLocation(program, "a_position");
textureLocation = GL.GetAttribLocation(program, "a_texture");
GL.EnableVertexAttribArray(positionLocation);
GL.EnableVertexAttribArray(textureLocation);
int stride = sizeof(float) * 5; // 5 because of (x, y, z, u, v)
int offsetUV = sizeof(float) * 3; // 3 because the u and v coordinates are the 4th and 5th components
GL.VertexAttribPointer(positionLocation, 3, VertexAttribPointerType.Float, false, stride, 0);
GL.VertexAttribPointer(textureLocation, 2, VertexAttribPointerType.Float, false, stride, (IntPtr)(offsetUV));
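One detail to watch (not from the original answer): with 5 floats per vertex, the draw call in RunShaders can no longer compute the vertex count as vertices.Length / 3; it has to divide by 5 instead:
// 5 floats (x, y, z, u, v) per vertex, so divide by 5 to get the vertex count.
GL.DrawArrays(PrimitiveType.Triangles, 0, vertices.Length / 5);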

How to check if device has been rotated on all axis in Unity

I want to check in Unity whether the device has been rotated on all of its axes.
So, I am reading the rotation of all the axis.
What should I do in order to validate, for example, that the user has "flipped" his device over the X axis? I need to check the values and see that they pass through 0, 90, 180 and 270 degrees in a loop.
Here is part of my code:
void Update () {
float X = Input.acceleration.x;
float Y = Input.acceleration.y;
float Z = Input.acceleration.z;
xText.text = ((Mathf.Atan2(Y, Z) * 180 / Mathf.PI)+180).ToString();
yText.text = ((Mathf.Atan2(X, Z) * 180 / Mathf.PI)+180).ToString();
zText.text = ((Mathf.Atan2(X, Y) * 180 / Mathf.PI)+180).ToString();
}
The accelerometer only tells you how the acceleration of the device changes, so you will get values when the device starts or stops moving. You can't retrieve its orientation from that.
Instead you need to use the gyroscope of the device. Most devices have one nowadays.
Fortunately, Unity supports the gyroscope through the Gyroscope class
Simply using
Input.gyro.attitude
will give you the orientation of the device in space, in the form of a quaternion.
To check the angles, use the eulerAngles property; for instance, to check whether the device is flipped on the x axis:
Vector3 angles = Input.gyro.attitude.eulerAngles;
bool xFlipped = angles.x > 180;
Be careful, you might have to invert some values if you want to apply the rotation in Unity (because it depends on which handedness the device uses for positive values):
// The Gyroscope is right-handed. Unity is left handed.
// Make the necessary change to the camera.
private static Quaternion GyroToUnity(Quaternion q)
{
return new Quaternion(q.x, q.y, -q.z, -q.w);
}
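To detect the full 0, 90, 180 and 270 degree sequence the question asks about, one possible approach is to step through the target angles with a tolerance. This is only a sketch (the class, the field names and the tolerance value are made up, not from the Unity docs):
using UnityEngine;
public class FlipDetector : MonoBehaviour
{
// Target angles the device has to pass through, in order.
private readonly float[] targets = { 0f, 90f, 180f, 270f };
private int nextTarget = 0;
private const float tolerance = 15f; // degrees, tweak as needed
void Start()
{
Input.gyro.enabled = true; // the gyroscope has to be enabled explicitly
}
void Update()
{
float x = Input.gyro.attitude.eulerAngles.x;
// DeltaAngle handles the wrap-around (e.g. 350 and 10 are only 20 degrees apart).
if (Mathf.Abs(Mathf.DeltaAngle(x, targets[nextTarget])) < tolerance)
{
nextTarget++;
if (nextTarget == targets.Length)
{
Debug.Log("Device was flipped all the way around the X axis");
nextTarget = 0;
}
}
}
}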
Here is the full example from the doc (Unity version 2017.3), in case the link above is broken. It shows how to read values from the gyroscope and apply them to an object in Unity.
// Create a cube with camera vector names on the faces.
// Allow the device to show named faces as it is oriented.
using UnityEngine;
public class ExampleScript : MonoBehaviour
{
// Faces for 6 sides of the cube
private GameObject[] quads = new GameObject[6];
// Textures for each quad, should be +X, +Y etc
// with appropriate colors, red, green, blue, etc
public Texture[] labels;
void Start()
{
// make camera solid colour and based at the origin
GetComponent<Camera>().backgroundColor = new Color(49.0f / 255.0f, 77.0f / 255.0f, 121.0f / 255.0f);
GetComponent<Camera>().transform.position = new Vector3(0, 0, 0);
GetComponent<Camera>().clearFlags = CameraClearFlags.SolidColor;
// create the six quads forming the sides of a cube
GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
quads[0] = createQuad(quad, new Vector3(1, 0, 0), new Vector3(0, 90, 0), "plus x",
new Color(0.90f, 0.10f, 0.10f, 1), labels[0]);
quads[1] = createQuad(quad, new Vector3(0, 1, 0), new Vector3(-90, 0, 0), "plus y",
new Color(0.10f, 0.90f, 0.10f, 1), labels[1]);
quads[2] = createQuad(quad, new Vector3(0, 0, 1), new Vector3(0, 0, 0), "plus z",
new Color(0.10f, 0.10f, 0.90f, 1), labels[2]);
quads[3] = createQuad(quad, new Vector3(-1, 0, 0), new Vector3(0, -90, 0), "neg x",
new Color(0.90f, 0.50f, 0.50f, 1), labels[3]);
quads[4] = createQuad(quad, new Vector3(0, -1, 0), new Vector3(90, 0, 0), "neg y",
new Color(0.50f, 0.90f, 0.50f, 1), labels[4]);
quads[5] = createQuad(quad, new Vector3(0, 0, -1), new Vector3(0, 180, 0), "neg z",
new Color(0.50f, 0.50f, 0.90f, 1), labels[5]);
GameObject.Destroy(quad);
}
// make a quad for one side of the cube
GameObject createQuad(GameObject quad, Vector3 pos, Vector3 rot, string name, Color col, Texture t)
{
Quaternion quat = Quaternion.Euler(rot);
GameObject GO = Instantiate(quad, pos, quat);
GO.name = name;
GO.GetComponent<Renderer>().material.color = col;
GO.GetComponent<Renderer>().material.mainTexture = t;
GO.transform.localScale += new Vector3(0.25f, 0.25f, 0.25f);
return GO;
}
protected void Update()
{
GyroModifyCamera();
}
protected void OnGUI()
{
GUI.skin.label.fontSize = Screen.width / 40;
GUILayout.Label("Orientation: " + Screen.orientation);
GUILayout.Label("input.gyro.attitude: " + Input.gyro.attitude);
GUILayout.Label("iphone width/font: " + Screen.width + " : " + GUI.skin.label.fontSize);
}
/********************************************/
// The Gyroscope is right-handed. Unity is left handed.
// Make the necessary change to the camera.
void GyroModifyCamera()
{
transform.rotation = GyroToUnity(Input.gyro.attitude);
}
private static Quaternion GyroToUnity(Quaternion q)
{
return new Quaternion(q.x, q.y, -q.z, -q.w);
}
}

Shadow is not on the correct position on oblique objects

At the beginning the ball rolls on a flat object; after that it rolls on an oblique object. On the flat object the ball's shadow is drawn in the correct position, but on oblique objects the ball's shadow is always in the wrong position. The shadow should always be centered directly under the ball.
When the ball is on the ground (flat object), ShadowRotation = 0f.
When the ball is on the obstacle (oblique object), ShadowRotation = -0.24f, because the slope of the obstacle is -0.24f.
The variable ShadowRotation is the only variable that I change when the ball is on an oblique object.
Why is the shadow not in the correct position when the ball rolls over an oblique object? Could anyone explain what I'm doing wrong?
Texture2D ShadowSprite, BallSprite, GroundSprite, ObstacleSprite;
float ShadowRotation;
spriteBatch.Draw(GroundSprite, new Vector2(GroundPosition.X, GroundPosition.Y), null, Color.White, 0, new Vector2(GroundSprite.Width / 2.0f, GroundSprite.Height / 2.0f), 1f, SpriteEffects.None, 0.10f);
spriteBatch.Draw(ObstacleSprite, new Vector2(ObstaclePosition.X, ObstaclePosition.Y), null, Color.White, 0, new Vector2(ObstacleSprite.Width / 2.0f, ObstacleSprite.Height / 2.0f), 1f, SpriteEffects.None, 0.09f);
spriteBatch.Draw(ShadowSprite, new Vector2(BallPosition.X, BallPosition.Y + BallSprite.Height / 2.0f), null, Color.White, ShadowRotation, new Vector2(ShadowSprite.Width / 2.0f, ShadowSprite.Height / 2.0f), 1f, SpriteEffects.None, 0.08f);
spriteBatch.Draw(BallSprite, new Vector2(BallPosition.X, BallPosition.Y), null, Color.White, BallRotation, new Vector2(BallSprite.Width / 2.0f, BallSprite.Height / 2.0f), 1f, SpriteEffects.None, 0.07f);
