OpenTK GL.Translate 2D camera on GLControl - C#

I am writing an ASCII game using OpenTK & WinForms, and I am stuck on camera movement and the 2D view.
This is the code I am using for the mouse event and for translating positions:
public static Point convertScreenToWorldCoords(int x, int y)
{
    int[] viewport = new int[4];
    Matrix4 modelViewMatrix, projectionMatrix;
    GL.GetFloat(GetPName.ModelviewMatrix, out modelViewMatrix);
    GL.GetFloat(GetPName.ProjectionMatrix, out projectionMatrix);
    GL.GetInteger(GetPName.Viewport, viewport);
    Vector2 mouse;
    mouse.X = x;
    mouse.Y = y;
    Vector4 vector = UnProject(ref projectionMatrix, modelViewMatrix, new Size(viewport[2], viewport[3]), mouse);
    Point coords = new Point((int)vector.X, (int)vector.Y);
    return coords;
}
public static Vector4 UnProject(ref Matrix4 projection, Matrix4 view, Size viewport, Vector2 mouse)
{
    Vector4 vec;
    vec.X = 2.0f * mouse.X / (float)viewport.Width - 1;
    vec.Y = 2.0f * mouse.Y / (float)viewport.Height - 1;
    vec.Z = 0;
    vec.W = 1.0f;
    Matrix4 viewInv = Matrix4.Invert(view);
    Matrix4 projInv = Matrix4.Invert(projection);
    Vector4.Transform(ref vec, ref projInv, out vec);
    Vector4.Transform(ref vec, ref viewInv, out vec);
    // Guard against dividing by a near-zero W.
    if (vec.W > float.Epsilon || vec.W < -float.Epsilon)
    {
        vec.X /= vec.W;
        vec.Y /= vec.W;
        vec.Z /= vec.W;
    }
    return vec;
}
// On mouse click event:
Control control = sender as Control;
Point worldCoords = convertScreenToWorldCoords(e.X, control.ClientRectangle.Height - e.Y);
// Convert world pixels to tile coordinates (a cell size of 9 is assumed here).
playerX = (int)Math.Floor((double)worldCoords.X / 9d);
playerY = (int)Math.Floor((double)worldCoords.Y / 9d);
And this code sets up my projection, but something is wrong here...
//Set Projection
GL.MatrixMode(MatrixMode.Projection);
GL.LoadIdentity();
GL.Ortho(0, Width * charWidth * scale, Height * charHeight * scale, 0, -1, 1);
GL.MatrixMode(MatrixMode.Modelview);
GL.LoadIdentity();
GL.Translate(playerX, playerY, 0);
Well, my problem is GL.Translate. As written, the view is not centered on playerX/playerY, and the "camera" movement seems reversed. If I use GL.Translate(-playerX, -playerY, 0), the movement direction seems correct, but the view is still not centered on the player object (the player object should always be at the center of the view, like a typical top-down camera). I don't know how to set this up correctly; my experiments with multiplying, dividing, etc. on my X,Y position did not give me the correct view. How should it be done in this case?
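For reference, one common way to keep the player centered is to negate the player's pixel position and offset by half the visible area. This is a minimal sketch of that technique under the assumptions of the Ortho call above (playerX/playerY in tile units, Width/Height in characters), not the original poster's code:

// Sketch only: center the view on the player.
// Assumes GL.Ortho(0, Width * charWidth * scale, Height * charHeight * scale, 0, -1, 1)
// and that playerX/playerY are tile coordinates.
float viewWidth = Width * charWidth * scale;
float viewHeight = Height * charHeight * scale;
GL.MatrixMode(MatrixMode.Modelview);
GL.LoadIdentity();
// Move the world opposite to the player (hence the negation), then shift
// by half the view so the player lands in the middle of the screen.
GL.Translate(viewWidth / 2f - playerX * charWidth * scale,
             viewHeight / 2f - playerY * charHeight * scale, 0f);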

Related

Random Rotation on a 3D sphere given an angle

This question is in between computer graphics, probability, and programming, but since I am coding it for a Unity project in C# I decided to post it here. Sorry if it is not appropriate.
I need to solve this problem: given an object on a 3D sphere at a certain position, and given a range of degrees, sample points on the sphere uniformly within the given range.
For example:
Left picture: the cube represents the center of the sphere, the green sphere is the starting position. I want to uniformly cover the surface of the sphere within a certain angle, for example from -90 to 90 degrees around the green sphere. My approach (right picture) doesn't work, as it over-samples points that are close to the starting position.
My sampler:
Vector3 getRandomEulerAngles(float min, float max)
{
    float degree = Random.Range(min, max);
    return degree * Vector3.Normalize(new Vector3(Random.Range(min, max), Random.Range(min, max), Random.Range(min, max)));
}
and for covering the top half of the sphere I would call getRandomEulerAngles(-90, 90).
Any idea?
Try this:
public class Sphere : MonoBehaviour
{
    public float Radius = 10f;
    public float Angle = 90f;

    private void Start()
    {
        for (int i = 0; i < 10000; i++)
        {
            var randomPosition = GetRandomPosition(Angle, Radius);
            Debug.DrawLine(transform.position, randomPosition, Color.green, 100f);
        }
    }

    private Vector3 GetRandomPosition(float angle, float radius)
    {
        var rotationX = Quaternion.AngleAxis(Random.Range(-angle, angle), transform.right);
        var rotationZ = Quaternion.AngleAxis(Random.Range(-angle, angle), transform.forward);
        var position = rotationZ * rotationX * transform.up * radius + transform.position;
        return position;
    }
}
We can use a uniform sphere sampling for that. Given two random variables u and v (uniformly distributed), we can calculate a random point (p, q, r) on the sphere (also uniformly distributed) with:
float azimuth = v * 2.0 * PI;
float cosDistFromZenith = 1.0 - u;
float sinDistFromZenith = sqrt(1.0 - cosDistFromZenith * cosDistFromZenith);
(p, q, r) = (cos(azimuth) * sinDistFromZenith, sin(azimuth) * sinDistFromZenith, cosDistFromZenith);
If we put our reference direction (your object location) into zenithal position, we need to sample v from [0, 1] to get all directions around the object and u in [cos(minDistance), cos(maxDistance)], where minDistance and maxDistance are the angle distances from the object you want to allow. A distance of 90° or Pi/2 will give you a hemisphere. A distance of 180° or Pi will give you the full sphere.
Now that we can sample the region around the object in zenithal position, we need to account for other object locations as well. Let the object be positioned at (ox, oy, oz), which is a unit vector describing the direction from the sphere center.
We then build a local coordinate system:
rAxis = (ox, oy, oz)
pAxis = if |ox| < 0.9 : (1, 0, 0)
else : (0, 1, 0)
qAxis = normalize(cross(rAxis, pAxis))
pAxis = cross(qAxis, rAxis)
And finally, we can get our random point (x, y, z) on the sphere surface:
(x, y, z) = p * pAxis + q * qAxis + r * rAxis
Adapted from Nico Schertler's answer, this is the code I am using:
Vector3 GetRandomAroundSphere(float angleA, float angleB, Vector3 aroundPosition)
{
    Assert.IsTrue(angleA >= 0 && angleB >= 0 && angleA <= 180 && angleB <= 180, "Both angles should be in [0, 180]");
    var v = Random.Range(0F, 1F);
    var a = Mathf.Cos(Mathf.Deg2Rad * angleA);
    var b = Mathf.Cos(Mathf.Deg2Rad * angleB);
    float azimuth = v * 2.0F * Mathf.PI;
    float cosDistFromZenith = Random.Range(Mathf.Min(a, b), Mathf.Max(a, b));
    float sinDistFromZenith = Mathf.Sqrt(1.0F - cosDistFromZenith * cosDistFromZenith);
    Vector3 pqr = new Vector3(Mathf.Cos(azimuth) * sinDistFromZenith, Mathf.Sin(azimuth) * sinDistFromZenith, cosDistFromZenith);
    Vector3 rAxis = aroundPosition; // Vector3.up when around zenith
    Vector3 pAxis = Mathf.Abs(rAxis[0]) < 0.9 ? new Vector3(1F, 0F, 0F) : new Vector3(0F, 1F, 0F);
    Vector3 qAxis = Vector3.Normalize(Vector3.Cross(rAxis, pAxis));
    pAxis = Vector3.Cross(qAxis, rAxis);
    Vector3 position = pqr[0] * pAxis + pqr[1] * qAxis + pqr[2] * rAxis;
    return position;
}
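A quick usage sketch (my illustration, not from the original answer), assuming the method above lives in a MonoBehaviour: draw a batch of sampled directions within 90 degrees of the world up axis to eyeball the distribution.

void Start()
{
    // Illustration only: sample 500 directions within 90 degrees of the
    // zenith (Vector3.up) and draw them from this object's position.
    for (int i = 0; i < 500; i++)
    {
        Vector3 dir = GetRandomAroundSphere(0f, 90f, Vector3.up);
        Debug.DrawLine(transform.position, transform.position + dir * 10f, Color.green, 100f);
    }
}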

Unity - Determining UVs for a circular plane mesh generated by code

I'm trying to generate a circular mesh made up of triangles with a common center at the center of the circle.
The mesh is generated properly, but the UVs are not and I am having some trouble understanding how to add them.
I assumed I would just copy the vertexes' pattern, but it didn't work out.
Here is the function:
private void _MakeMesh(int sides, float radius = 0.5f)
{
    m_LiquidMesh.Clear();
    float angleStep = 360.0f / (float) sides;
    List<Vector3> vertexes = new List<Vector3>();
    List<int> triangles = new List<int>();
    List<Vector2> uvs = new List<Vector2>();
    Quaternion rotation = Quaternion.Euler(0.0f, angleStep, 0.0f);

    // Make first triangle.
    vertexes.Add(new Vector3(0.0f, 0.0f, 0.0f));
    vertexes.Add(new Vector3(radius, 0.0f, 0.0f));
    vertexes.Add(rotation * vertexes[1]);

    // First UV ??
    uvs.Add(new Vector2(0, 0));
    uvs.Add(new Vector2(1, 0));
    uvs.Add(rotation * uvs[1]);

    // Add triangle indices.
    triangles.Add(0);
    triangles.Add(1);
    triangles.Add(2);

    for (int i = 0; i < sides - 1; i++)
    {
        triangles.Add(0);
        triangles.Add(vertexes.Count - 1);
        triangles.Add(vertexes.Count);
        // UV ??
        vertexes.Add(rotation * vertexes[vertexes.Count - 1]);
    }

    m_LiquidMesh.vertices = vertexes.ToArray();
    m_LiquidMesh.triangles = triangles.ToArray();
    m_LiquidMesh.uv = uvs.ToArray();
    m_LiquidMesh.RecalculateNormals();
    m_LiquidMesh.RecalculateBounds();
    Debug.Log("<color=yellow>Liquid mesh created</color>");
}
How does mapping UV work in a case like this?
Edit: I'm trying to use this circle as an effect of something flowing outwards from the center (think: liquid mesh for a brewing pot)
This is an old post, but maybe someone else will benefit from my solution.
So basically I gave my center point the center of the UV (0.5, 0.5) and then used the unit-circle formula to give every other point its UV coordinate. Of course I had to remap the cos and sin results from -1..1 to 0..1, and everything works great.
Vector2[] uv = new Vector2[vertices.Length];
uv[uv.Length - 1] = new Vector2(0.5f, 0.5f);
for (int i = 0; i < uv.Length - 1; i++)
{
    float radians = (float) i / (uv.Length - 1) * 2 * Mathf.PI;
    uv[i] = new Vector2(Mathf.Cos(radians).Remap(-1f, 1f, 0f, 1f), Mathf.Sin(radians).Remap(-1f, 1f, 0f, 1f));
}
mesh.uv = uv;
Where Remap is an extension method like this; it basically takes a value in one range and remaps it to another range (in this case from -1..1 to 0..1):
public static float Remap(this float value, float from1, float to1, float from2, float to2)
{
    return (value - from1) / (to1 - from1) * (to2 - from2) + from2;
}
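As a quick sanity check (my worked example, not part of the original answer): the rim vertex at angle 0 lies on the circle's +X edge, so it should sample the middle of the texture's right edge.

float u0 = Mathf.Cos(0f).Remap(-1f, 1f, 0f, 1f); // cos(0) = 1 -> u = 1.0
float v0 = Mathf.Sin(0f).Remap(-1f, 1f, 0f, 1f); // sin(0) = 0 -> v = 0.5
// => UV (1.0, 0.5): the middle of the right edge, as expected.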

How do I properly setup a texture position using XNA/Monogame VertexPositionTexture on a circle

I am using the following to create a circle using VertexPositionTexture:
public static ObjectData Circle(Vector2 origin, float radius, int slices)
{
    /// See below
}
The texture that is applied to it doesn't look right; it spirals out from the center. I have tried some other things, but nothing does it how I want. I would like for it to kind-of just fan around the circle, or start in the top-left and finish in the bottom-right. Basically, I want it to be easier to create textures for it.
I know there are MUCH easier ways to do this without using meshes, but that is not what I am trying to accomplish right now.
This is the code that ended up working thanks to Pinckerman:
public static ObjectData Circle(Vector2 origin, float radius, int slices)
{
    VertexPositionTexture[] vertices = new VertexPositionTexture[slices + 2];
    int[] indices = new int[slices * 3];
    float x = origin.X;
    float y = origin.Y;
    float deltaRad = MathHelper.ToRadians(360) / slices;
    float delta = 0;

    // The center vertex samples the middle of the texture.
    vertices[0] = new VertexPositionTexture(new Vector3(x, y, 0), new Vector2(.5f, .5f));

    for (int i = 1; i < slices + 2; i++)
    {
        float newX = (float)Math.Cos(delta) * radius + x;
        float newY = (float)Math.Sin(delta) * radius + y;
        // Map each rim point from [-radius, radius] into [0, 1] UV space.
        float textX = 0.5f + ((radius * (float)Math.Cos(delta)) / (radius * 2));
        float textY = 0.5f + ((radius * (float)Math.Sin(delta)) / (radius * 2));
        vertices[i] = new VertexPositionTexture(new Vector3(newX, newY, 0), new Vector2(textX, textY));
        delta += deltaRad;
    }

    // Build a triangle fan around the center vertex.
    for (int i = 0; i < slices; i++)
    {
        indices[3 * i] = 0;
        indices[(3 * i) + 1] = i + 1;
        indices[(3 * i) + 2] = i + 2;
    }

    ObjectData thisData = new ObjectData()
    {
        Vertices = vertices,
        Indices = indices
    };
    return thisData;
}

public static ObjectData Ellipse()
{
    ObjectData thisData = new ObjectData()
    {
    };
    return thisData;
}
ObjectData is just a structure that contains an array of vertices & an array of indices.
Hope this helps others that may be trying to accomplish something similar.
It looks like a spiral because you've set the upper-left point of the texture, Vector2(0,0), at the center of your "circle", and that's wrong. You need to set it on the top-left vertex of the top-left slice of your circle, because (0,0) of your UV map is the upper-left corner of your texture.
I think you need to set (0.5, 0) for the upper vertex, (1, 0.5) for the right, (0.5, 1) for the lower and (0, 0.5) for the left, or something like this, and for the others use some trigonometry.
The center of your circle has to be Vector2(0.5, 0.5).
Regarding the trigonometry, I think you should do something like this.
The center of your circle has UV value of Vector2(0.5, 0.5), and for the others (supposing the second point of the sequence is just right to the center, having UV value of Vector2(1, 0.5)) try something like this:
vertices[i] = new VertexPositionTexture(new Vector3(newX, newY, 0), new Vector2(0.5f + radius * (float)Math.Cos(delta), 0.5f - radius * (float)Math.Sin(delta)));
I've just edited your third line in the for-loop. This should give you the UV coordinates you need for each point. I hope so.

Getting Mouse Coordinates from 2D Camera and Independent Resolution

I've been trying to combine these two samples from David Amador:
http://www.david-amador.com/2010/03/xna-2d-independent-resolution-rendering/
http://www.david-amador.com/2009/10/xna-camera-2d-with-zoom-and-rotation/
Everything is working fine except I'm having some difficulty getting the mouse coordinates. I was able to get them for each individual sample, but my math for taking both into account seems to be wrong.
The mouse coordinates ARE correct if my virtual resolution and actual resolution are the same. It's when I do something like Resolution.SetVirtualResolution(1920, 1080) and Resolution.SetResolution(1280, 720, false) that the coordinates slowly get out of sync as I move the camera.
Here is the code:
public static Vector2 MousePositionCamera(Camera camera)
{
    float MouseWorldX = (Mouse.GetState().X - Resolution.VirtualWidth * 0.5f + (camera.position.X) * (float)Math.Pow(camera.Zoom, 1)) /
        (float)Math.Pow(camera.Zoom, 1);
    float MouseWorldY = ((Mouse.GetState().Y - Resolution.VirtualHeight * 0.5f + (camera.position.Y) * (float)Math.Pow(camera.Zoom, 1))) /
        (float)Math.Pow(camera.Zoom, 1);
    Vector2 mousePosition = new Vector2(MouseWorldX, MouseWorldY);
    Vector2 virtualViewport = new Vector2(Resolution.VirtualViewportX, Resolution.VirtualViewportY);
    mousePosition = Vector2.Transform(mousePosition - virtualViewport, Matrix.Invert(Resolution.getTransformationMatrix()));
    return mousePosition;
}
In the Resolution class I added this:
virtualViewportX = (_Device.PreferredBackBufferWidth / 2) - (width / 2);
virtualViewportY = (_Device.PreferredBackBufferHeight / 2) - (height / 2);
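As a worked example with made-up numbers (not from the original post), centering a 960x720 scaled viewport inside a 1280x720 back buffer gives:

int virtualViewportX = (1280 / 2) - (960 / 2); // = 160 -> 160px bars on the left and right
int virtualViewportY = (720 / 2) - (720 / 2);  // = 0   -> no vertical letterboxing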
Everything else is the same as the sample code. Thanks in advance!
Thanks to David Gouveia I was able to identify the problem... My camera matrix was using the wrong math.
I'm going to post all of my code with the hopes of helping someone who is trying to do something similar.
Camera transformation matrix:
public Matrix GetTransformMatrix()
{
    transform = Matrix.CreateTranslation(new Vector3(-position.X, -position.Y, 0)) * Matrix.CreateRotationZ(rotation) *
        Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) * Matrix.CreateTranslation(new Vector3(Resolution.VirtualWidth
        * 0.5f, Resolution.VirtualHeight * 0.5f, 0));
    return transform;
}
That will also center the camera. Here's how you get the mouse coordinates combining both the Resolution class and camera class:
public static Vector2 MousePositionCamera(Camera camera)
{
    Vector2 mousePosition;
    mousePosition.X = Mouse.GetState().X;
    mousePosition.Y = Mouse.GetState().Y;
    // Adjust for resolutions like 800 x 600 that are letterboxed on the Y:
    mousePosition.Y -= Resolution.VirtualViewportY;
    // Undo the resolution scaling first, then the camera transform.
    Vector2 screenPosition = Vector2.Transform(mousePosition, Matrix.Invert(Resolution.getTransformationMatrix()));
    Vector2 worldPosition = Vector2.Transform(screenPosition, Matrix.Invert(camera.GetTransformMatrix()));
    return worldPosition;
}
Combined with all of the other code I posted/mentioned this should be all you need to achieve resolution independence and an awesome camera!

Applying modeling matrix to view matrix = failure

I've got a problem with moving and rotating objects in OpenGL. I'm using C# and OpenTK (Mono), but I guess the problem is with me not understanding the OpenGL part, so you might be able to help me even if you don't know anything about C# / OpenTK.
I'm reading the OpenGL SuperBible (latest edition) and I tried to rewrite the GLFrame in C#. Here is the part I've already rewritten:
public class GameObject
{
    protected Vector3 vLocation;
    public Vector3 vUp;
    protected Vector3 vForward;

    public GameObject(float x, float y, float z)
    {
        vLocation = new Vector3(x, y, z);
        vUp = Vector3.UnitY;
        vForward = Vector3.UnitZ;
    }

    public Matrix4 GetMatrix(bool rotationOnly = false)
    {
        Matrix4 matrix;
        Vector3 vXAxis;
        Vector3.Cross(ref vUp, ref vForward, out vXAxis);
        matrix = new Matrix4();
        matrix.Row0 = new Vector4(vXAxis.X, vUp.X, vForward.X, vLocation.X);
        matrix.Row1 = new Vector4(vXAxis.Y, vUp.Y, vForward.Y, vLocation.Y);
        matrix.Row2 = new Vector4(vXAxis.Z, vUp.Z, vForward.Z, vLocation.Z);
        matrix.Row3 = new Vector4(0.0f, 0.0f, 0.0f, 1.0f);
        return matrix;
    }

    public void Move(float x, float y, float z)
    {
        vLocation = new Vector3(x, y, z);
    }

    public void RotateLocalZ(float angle)
    {
        Matrix4 rotMat;
        // Just rotate around the up vector:
        // create a rotation matrix around my Up (Y) vector.
        rotMat = Matrix4.CreateFromAxisAngle(vForward, angle);
        Vector3 newVect;
        // Rotate forward pointing vector (inlined 3x3 transform).
        newVect.X = rotMat.M11 * vUp.X + rotMat.M12 * vUp.Y + rotMat.M13 * vUp.Z;
        newVect.Y = rotMat.M21 * vUp.X + rotMat.M22 * vUp.Y + rotMat.M23 * vUp.Z;
        newVect.Z = rotMat.M31 * vUp.X + rotMat.M32 * vUp.Y + rotMat.M33 * vUp.Z;
        vUp = newVect;
    }
}
So I create a new GameObject (GLFrame) at some coordinates: GameObject go = new GameObject(0, 0, 5); and rotate it a bit: go.RotateLocalZ(rotZ);. Then I get the matrix using Matrix4 matrix = go.GetMatrix(); and render the frame (first I set the viewing matrix, then I multiply it by the modeling matrix):
protected override void OnRenderFrame(FrameEventArgs e)
{
    base.OnRenderFrame(e);
    this.Title = "FPS: " + (1 / e.Time).ToString("0.0");
    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
    GL.MatrixMode(MatrixMode.Modelview);
    GL.LoadIdentity();
    Matrix4 modelmatrix = go.GetMatrix();
    Matrix4 viewmatrix = Matrix4.LookAt(new Vector3(5, 5, -10), Vector3.Zero, Vector3.UnitY);
    GL.LoadMatrix(ref viewmatrix);
    GL.MultMatrix(ref modelmatrix);
    DrawCube(new float[] { 0.5f, 0.4f, 0.5f, 0.8f });
    SwapBuffers();
}
DrawCube(float[] color) is my own method for drawing a cube.
Now the most important part: if I render the frame without the GL.MultMatrix(ref matrix); part, but using GL.Translate() and GL.Rotate(), it works (second screenshot). However, if I don't use these two methods and instead pass the modeling matrix directly to OpenGL using GL.MultMatrix(), it draws something strange (first screenshot).
Can you help me and explain where the problem is? Why does it work with the translate and rotate methods, but not when multiplying the view matrix by the modeling matrix?
OpenGL transformation matrices are ordered column-wise, while the matrix built in GetMatrix lays its basis vectors and translation out across the rows. You should use the transpose of the matrix you are using.
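In OpenTK that can be as simple as the following sketch (my example using OpenTK's static Matrix4.Transpose, not code from the question):

// Transpose the hand-built row-ordered matrix so that OpenGL's
// column-major interpretation reads it correctly.
Matrix4 modelmatrix = Matrix4.Transpose(go.GetMatrix());
GL.LoadMatrix(ref viewmatrix);
GL.MultMatrix(ref modelmatrix);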
