Modern OpenGL: Object picking (C#, OpenTK)

I am trying to implement object picking in OpenGL using C# and OpenTK. I have written a class for this purpose based on two sources:
OpenGL ray casting (picking): account for object's transform
https://www.bfilipek.com/2012/06/select-mouse-opengl.html
Currently my code only calculates the distance of the mouse pointer from an arbitrary test coordinate at (0,0,0), but once it works it would not take much to iterate through the objects in a scene to find a match.
The method is to define a ray underneath the mouse pointer between the near and far clipping planes. Then find the point on that ray which is closest to the point being tested and return the distance between the two. This should be zero when the mouse pointer is directly over (0,0,0) and increase as it moves away in any direction.
Can anyone help troubleshoot this? It executes without errors but the distance being returned clearly isn't correct. I understand the principles but not the finer points of the calculations.
Although I have found various examples online that almost do it, they are generally in a different language or framework, use deprecated methods, or are incomplete or not working.
public class ObjectPicker
{
    public static float DistanceFromPoint(Point mouseLocation, Vector3 testPoint, Matrix4 modelView, Matrix4 projection)
    {
        Vector3 near = UnProject(new Vector3(mouseLocation.X, mouseLocation.Y, 0), modelView, projection); // start of ray
        Vector3 far = UnProject(new Vector3(mouseLocation.X, mouseLocation.Y, 1), modelView, projection);  // end of ray
        Vector3 pt = ClosestPoint(near, far, testPoint); // find point on ray which is closest to test point
        return Vector3.Distance(pt, testPoint);          // return the distance
    }

    private static Vector3 ClosestPoint(Vector3 A, Vector3 B, Vector3 P)
    {
        Vector3 AB = B - A;
        float ab_square = Vector3.Dot(AB, AB);
        Vector3 AP = P - A;
        float ap_dot_ab = Vector3.Dot(AP, AB);
        // t is the projection parameter when we project vector AP onto AB
        float t = ap_dot_ab / ab_square;
        // calculate the closest point
        Vector3 Q = A + Vector3.Multiply(AB, t);
        return Q;
    }

    private static Vector3 UnProject(Vector3 screen, Matrix4 modelView, Matrix4 projection)
    {
        int[] viewport = new int[4];
        OpenTK.Graphics.OpenGL.GL.GetInteger(OpenTK.Graphics.OpenGL.GetPName.Viewport, viewport);

        Vector4 pos = new Vector4();
        // Map x and y from window coordinates to the range -1 to 1
        pos.X = (screen.X - (float)viewport[0]) / (float)viewport[2] * 2.0f - 1.0f;
        pos.Y = 1 - (screen.Y - (float)viewport[1]) / (float)viewport[3] * 2.0f;
        pos.Z = screen.Z * 2.0f - 1.0f;
        pos.W = 1.0f;

        Vector4 pos2 = Vector4.Transform(pos, Matrix4.Invert(modelView) * projection);
        Vector3 pos_out = new Vector3(pos2.X, pos2.Y, pos2.Z);
        return pos_out / pos2.W;
    }
}
It is called like this:
private void GlControl1_MouseMove(object sender, MouseEventArgs e)
{
    float dist = ObjectPicker.DistanceFromPoint(new Point(e.X, e.Y), new Vector3(0, 0, 0), model, projection);
    this.Text = dist.ToString(); // display in window caption for debugging
}
You can see above how the matrices are passed in. I'm fairly sure the contents of those matrices are correct, since rendering works fine and I can rotate and zoom successfully. This is the vertex shader, FWIW:
string vertexShaderSource =
    "#version 330 core\n" +
    "layout(location = 0) in vec3 aPos;" +
    "layout(location = 1) in vec3 aNormal;" +
    "uniform mat4 model;" +
    "uniform mat4 view;" +
    "uniform mat4 projection;" +
    "out vec3 FragPos;" +
    "out vec3 Normal;" +
    "void main()" +
    "{" +
    "gl_Position = projection * view * model * vec4(aPos, 1.0);" +
    "FragPos = vec3(model * vec4(aPos, 1.0));" +
    "Normal = vec3(model * vec4(aNormal, 1.0));" +
    "}";
I use an implementation of Arcball for rotation. Zooming is done using a translation, like this:
private void glControl1_MouseWheel(object sender, System.Windows.Forms.MouseEventArgs e)
{
    zoom += (float)e.Delta / 240;
    view = Matrix4.CreateTranslation(0.0f, 0.0f, zoom);
    SetMatrix4(Handle, "view", view);
    glControl1.Invalidate();
}

Each vertex coordinate is transformed by the modelview matrix, which takes it from model space to view space, and then by the projection matrix, which takes it from view space to clip space. Finally, the perspective divide converts clip space coordinates to normalized device space.
If you want to convert from normalized device space to model space you have to do the reverse operations. That means you have to transform by the inverse projection matrix and the inverse model view matrix:
Vector4 pos2 = Vector4.Transform(pos, Matrix4.Invert(projection) * Matrix4.Invert(modelView));
or, equivalently:
Vector4 pos2 = Vector4.Transform(pos, Matrix4.Invert(modelView * projection));
Note that OpenTK matrices have to be multiplied from left to right. See the answer to OpenGL 4.2 LookAt matrix only works with -z value for eye position.
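As a quick sanity check of that convention, the two forms above should agree: transforming stepwise through each inverse should match transforming once through the combined inverse. A minimal sketch, assuming OpenTK's Vector4.Transform treats the vector as a row vector (as it does in OpenTK 3.x):
// ndc is an arbitrary point in normalized device coordinates (w = 1)
Vector4 ndc = new Vector4(0.25f, -0.5f, 0.9f, 1.0f);
// apply the inverses one at a time: projection first, then modelview
Vector4 stepwise = Vector4.Transform(Vector4.Transform(ndc, Matrix4.Invert(projection)), Matrix4.Invert(modelView));
// apply them as a single combined matrix, multiplied from left to right
Vector4 combined = Vector4.Transform(ndc, Matrix4.Invert(modelView * projection));
// stepwise and combined should now be equal, up to floating-point error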

Answering my own question here so that I can post the working code for the benefit of other users, but at least half the answer was provided by Rabbid76, whose help I am very grateful for.
There were two errors in my original code:
Vector4 pos2 = Vector4.Transform( pos, Matrix4.Invert(modelView) * projection );
where the two matrices were multiplied in the wrong order, and the projection matrix was not inverted.
float dist = ObjectPicker.DistanceFromPoint(new Point(e.X,e.Y), new Vector3(0,0,0), model, projection);
where I passed in the model matrix not the modelview matrix (which is the product of the model and view matrices).
This works:
private void GlControl1_MouseMove(object sender, MouseEventArgs e)
{
    float dist = ObjectPicker.DistanceFromPoint(new Point(e.X, e.Y), new Vector3(0, 0, 0), model * view, projection);
    // do something with the result
}

public class ObjectPicker
{
    public static float DistanceFromPoint(Point mouseLocation, Vector3 testPoint, Matrix4 modelView, Matrix4 projection)
    {
        Vector3 near = UnProject(new Vector3(mouseLocation.X, mouseLocation.Y, 0), modelView, projection); // start of ray (near plane)
        Vector3 far = UnProject(new Vector3(mouseLocation.X, mouseLocation.Y, 1), modelView, projection);  // end of ray (far plane)
        Vector3 pt = ClosestPoint(near, far, testPoint); // find point on ray which is closest to test point
        return Vector3.Distance(pt, testPoint);          // return the distance
    }

    private static Vector3 ClosestPoint(Vector3 A, Vector3 B, Vector3 P)
    {
        Vector3 AB = B - A;
        float ab_square = Vector3.Dot(AB, AB);
        Vector3 AP = P - A;
        float ap_dot_ab = Vector3.Dot(AP, AB);
        // t is the projection parameter when we project vector AP onto AB
        float t = ap_dot_ab / ab_square;
        // calculate the closest point
        Vector3 Q = A + Vector3.Multiply(AB, t);
        return Q;
    }

    private static Vector3 UnProject(Vector3 screen, Matrix4 modelView, Matrix4 projection)
    {
        int[] viewport = new int[4];
        OpenTK.Graphics.OpenGL.GL.GetInteger(OpenTK.Graphics.OpenGL.GetPName.Viewport, viewport);

        Vector4 pos = new Vector4();
        // Map x and y from window coordinates to the range -1 to 1
        pos.X = (screen.X - (float)viewport[0]) / (float)viewport[2] * 2.0f - 1.0f;
        pos.Y = 1 - (screen.Y - (float)viewport[1]) / (float)viewport[3] * 2.0f;
        pos.Z = screen.Z * 2.0f - 1.0f;
        pos.W = 1.0f;

        Vector4 pos2 = Vector4.Transform(pos, Matrix4.Invert(projection) * Matrix4.Invert(modelView));
        Vector3 pos_out = new Vector3(pos2.X, pos2.Y, pos2.Z);
        return pos_out / pos2.W;
    }
}
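To extend this to picking among several objects, as anticipated in the question, you can run DistanceFromPoint against each candidate and keep the nearest hit. A hedged sketch, where SceneObject, Position, ModelMatrix and pickRadius are hypothetical names rather than part of the code above:
SceneObject PickObject(Point mouse, IEnumerable<SceneObject> scene, Matrix4 view, Matrix4 projection, float pickRadius)
{
    SceneObject best = null;
    float bestDist = pickRadius; // ignore anything farther than this from the ray
    foreach (var obj in scene)
    {
        // each object has its own model matrix, so build its modelview here
        float d = ObjectPicker.DistanceFromPoint(mouse, obj.Position, obj.ModelMatrix * view, projection);
        if (d < bestDist)
        {
            bestDist = d;
            best = obj;
        }
    }
    return best; // null when nothing is within pickRadius
}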
Since posting this question I have learned that the method is generally called ray casting, and have found a couple of excellent explanations of it:
Mouse Picking with Ray Casting by Anton Gerdelan
OpenGL 3D Game Tutorial 29: Mouse Picking by ThinMatrix

Related

Calculate 3D rotation of a plane given 3 points and their relative position

I'm trying to rotate a plane in 3D in Unity, given three points. These points are not always the same, but they are always on the red dots in this image
I know the absolute positions of these points, along with their positions relative to the plane. I might, for example, know that point (-5, 5) is located at (10, -4, 13).
I know that when the three points are on a line (e.g. (-5, 5), (0, 5) and (5, 5)) it's not possible to calculate the complete rotation, so I already throw an exception when this is the case. However, when the three points are not on a single line, it should be possible to calculate the complete rotation.
So far I've used the code below to rotate, but it misses the rotation around the y-axis (the example uses points (-5, 5), (5, 5) and (-5, -5)).
Vector3 p1 = Point1.transform.position; // point 1 absolute position
Vector3 p1Relative = new Vector3(-5, 0, 5);
Vector3 p2 = Point2.transform.position; // point 2 absolute position
Vector3 p2Relative = new Vector3(5, 0, 5);
Vector3 p3 = Point3.transform.position; // point 3 absolute position
Vector3 p3Relative = new Vector3(-5, 0, -5);

Gizmos.DrawSphere(p1, .1f);
Gizmos.DrawSphere(p2, .1f);
Gizmos.DrawSphere(p3, .1f);

Vector3 normal = Vector3.Cross(p2 - p1, p3 - p1);
rotator.transform.up = normal;
How would I expand or change this code to include the rotation around the Y-axis? Thank you in advance.
This answer uses the method described by robjohn on Math Stack Exchange.
You can solve for a pair of Matrix4x4s that transform from the first coordinate system into the second, then use them to calculate both the origin (the central point) and the rotation that aligns the axes (using Quaternion.LookRotation). The idea is to build a matrix P from edge vectors between the points in the relative frame and a matrix Q from the corresponding vectors in the absolute frame; Q * P.inverse then maps relative coordinates into world space. A fourth, off-plane point is generated with a cross product on each side so the full 3D rotation is determined:
public class test : MonoBehaviour
{
    [SerializeField] Transform point1;
    [SerializeField] Transform point2;
    [SerializeField] Transform point3;
    [SerializeField] Vector2 relativePos1;
    [SerializeField] Vector2 relativePos2;
    [SerializeField] Vector2 relativePos3;
    [SerializeField] Transform demonstrator;

    void Update()
    {
        Vector3 absolutePos1 = point1.position;
        Vector3 absolutePos2 = point2.position;
        Vector3 absolutePos3 = point3.position;

        Vector3 relativePos4 = (Vector3)relativePos1 +
                Vector3.Cross(relativePos2 - relativePos1, relativePos3 - relativePos1);
        Vector3 absolutePos4 = absolutePos1 +
                Vector3.Cross(absolutePos2 - absolutePos1, absolutePos3 - absolutePos1);

        Vector4 rightColumn = new Vector4(0, 0, 0, 1);
        Matrix4x4 P = new Matrix4x4(relativePos2 - relativePos1,
                                    relativePos3 - relativePos1,
                                    relativePos4 - (Vector3)relativePos1, rightColumn);
        Matrix4x4 Q = new Matrix4x4(absolutePos2 - absolutePos1,
                                    absolutePos3 - absolutePos1,
                                    absolutePos4 - absolutePos1, rightColumn);

        Vector3 originPos = TransformPos(P, Q, Vector3.zero);
        // up in source is forward in output
        Vector3 forwardPos = TransformPos(P, Q, Vector3.up);
        // back in source is up in output
        Vector3 upPos = TransformPos(P, Q, Vector3.back);

        demonstrator.position = originPos;
        demonstrator.rotation = Quaternion.LookRotation(forwardPos - originPos, upPos - originPos);
    }

    Vector3 TransformPos(Matrix4x4 P, Matrix4x4 Q, Vector3 input)
    {
        Matrix4x4 invP = P.inverse;
        return Q * invP * input + ((Vector4)point1.position - Q * invP * relativePos1);
    }
}

Moving Object along a vector in sinusoidal motion

Any tips on how to move an object back and forth sinusoidally (like a pendulum, but in a linear path) along a specified 3D vector? I've got the sinusoidal motion and the vector, but I can't figure out how to combine the two.
The following are the two pieces of code I have; the vector is specified using angles from the origin.
I'm very new to coding, so please forgive me for any mistakes in the code.
This moves the object in the sinusoidal path about the origin - this is the motion I want to achieve along the 3D vector.
float rodPositionZsin = pathLength * Mathf.Sin(Time.time) + position;
transform.position = new Vector3(0, 0, rodPositionZsin);
This will move the object along the vector in the X and Y dimensions, but I'm stumped for what to do in the Z.
float Xangle = 20;
float Yangle = 50;
float Zangle = 30;

// Position transformations
float rodPositionZsin = pathLength * Mathf.Sin(Time.time) + position;
float rodPositionY = Mathf.Cos(Yangle * Mathf.PI / 180) * pathLength;
float rodPositionX = Mathf.Sin(Xangle * Mathf.PI / 180) * pathLength;
float rodPositionZ = Mathf.Tan(Zangle * Mathf.PI / 180) * pathLength;

transform.position = Vector2.MoveTowards(transform.position, new Vector2(rodPositionX, rodPositionY), pathLength * Mathf.Sin(Time.time));
rodPositionX = transform.position.x;
rodPositionY = transform.position.y;
rodPositionZ = rodPositionZsin + transform.position.z;
transform.position = new Vector3(rodPositionX, rodPositionY, rodPositionZsin);
If you have a vector, you just need to scale it by a sine curve, then set the object's position to that scaled vector.
So, in (untested) pseudocode:
Vector3 scaledVector = originalVector * Mathf.Sin(Time.time);
yourGameObject.transform.position = scaledVector;
You can then add amplitude, frequency, and phase terms to the sine function to control how far the object travels along the vector, how fast it oscillates, and where in the cycle it starts, if you want to customize it further.
Edit:
Here's how to add these (see http://jwilson.coe.uga.edu/EMT668/EMT668.Folders.F97/Feller/sine/assmt1.html):
a * sin(b*x + c) + offset
where a is the amplitude (max distance travelled), b sets the wavelength (1/frequency of oscillation), c is the phase (starting position), and offset moves the whole oscillation pattern along the vector (so it happens away from the origin).
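Putting those pieces together, here is a minimal (untested) Unity sketch; the direction, origin and other serialized field names are assumptions for illustration:
using UnityEngine;

public class SinusoidalMover : MonoBehaviour
{
    [SerializeField] Vector3 direction = Vector3.forward; // the 3D vector to oscillate along
    [SerializeField] Vector3 origin;                      // center of the oscillation
    [SerializeField] float amplitude = 1f;                // a: max distance travelled
    [SerializeField] float frequency = 1f;                // sets b: oscillations per second
    [SerializeField] float phase = 0f;                    // c: starting point in the cycle
    [SerializeField] float offset = 0f;                   // shifts the pattern along the vector

    void Update()
    {
        // a * sin(b*t + c) + offset, applied along the normalized direction
        float s = amplitude * Mathf.Sin(2f * Mathf.PI * frequency * Time.time + phase) + offset;
        transform.position = origin + direction.normalized * s;
    }
}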

Is there a way to transport RectTransform into camera ViewPort in unity3d?

Is there a way to transport a RectTransform into a camera viewport in Unity3D? I have tried to do it several times, but with no result.
I want to make the camera's viewport fit exactly within the bounds of a given RectTransform.
Here is my code:
public static Rect RectTransformToCameraViewport(RectTransform rectTransform)
{
    float leftDownCornerX = rectTransform.anchoredPosition.x - rectTransform.sizeDelta.x / 2;
    float leftDownCornerY = rectTransform.anchoredPosition.y - rectTransform.sizeDelta.y / 2;
    Vector3 leftCorner = new Vector3(leftDownCornerX, leftDownCornerY, 0);
    Vector3 viewPortLeftCorner = new Vector3(leftCorner.x / Screen.width, leftCorner.y / Screen.height, 0);
    float viewportWidth = Mathf.Abs(rectTransform.sizeDelta.x / Screen.width);
    float viewportHeight = Mathf.Abs(rectTransform.sizeDelta.y / Screen.height);
    return new Rect(0.5f + viewPortLeftCorner.x, 0.5f + viewPortLeftCorner.y, viewportWidth, viewportHeight);
}
But as I said before it does not work.
Edit 1:
It works, but not the same way on the two different machines I work on. Maybe there is something wrong with one of those rects.
Maybe you have already solved this problem?
I had this issue in one of my previous projects, where the camera needed to move to world-space UI canvases. Here is my implementation:
private static void CalculateCameraTransform(Canvas canvas, float fov, out Vector3 pos, out Quaternion rot)
{
    var rectTransform = canvas.GetComponent<RectTransform>();
    var corners = new Vector3[4];
    rectTransform.GetWorldCorners(corners);
    var normal = GetNormalPlane(corners[0], corners[1], corners[2]);
    var distance = Mathf.Cos(fov * 0.5f * Mathf.Deg2Rad) * Vector3.Distance(corners[0], corners[2]) * 0.5f;
    pos = canvas.transform.position + normal * distance;
    rot = Quaternion.LookRotation(-normal);
}

private static Vector3 GetNormalPlane(Vector3 a, Vector3 b, Vector3 c)
{
    var dir = Vector3.Cross(b - a, c - a);
    var norm = Vector3.Normalize(dir);
    return norm;
}
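A hedged usage sketch, where worldCamera is a hypothetical Camera reference and canvas is the world-space canvas described above:
// move the camera so the canvas exactly fills its view
CalculateCameraTransform(canvas, worldCamera.fieldOfView, out Vector3 pos, out Quaternion rot);
worldCamera.transform.SetPositionAndRotation(pos, rot);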

Control a spaceship in 3 axes in XNA

When I control the spaceship on one axis all is fine: that is, the depth (z) and the rotation in the z plane through 360 degrees, so that's two axes. I also have a camera right behind it whose position I have to maintain. When the third axis comes into play, everything goes wrong. Let me show you some code.
Here is the part that fails, the Draw method of the spaceship:
public float ForwardDirection { get; set; }
public float VerticalDirection { get; set; }

public void Draw(Matrix view, Matrix projection)
{
    Matrix[] transforms = new Matrix[Model.Bones.Count];
    Model.CopyAbsoluteBoneTransformsTo(transforms);

    Matrix worldMatrix = Matrix.Identity;
    Matrix worldMatrix2 = Matrix.Identity;
    Matrix rotationYMatrix = Matrix.CreateRotationY(ForwardDirection);
    Matrix rotationXMatrix = Matrix.CreateRotationX(VerticalDirection); // me
    Matrix translateMatrix = Matrix.CreateTranslation(Position);
    worldMatrix = rotationYMatrix * translateMatrix;
    worldMatrix2 = rotationXMatrix * translateMatrix;
    //worldMatrix *= rotationXMatrix;

    foreach (ModelMesh mesh in Model.Meshes) // NEED TO FIX THIS
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.World = worldMatrix * transforms[mesh.ParentBone.Index]; // position
            //effect.World = worldMatrix2 * transforms[mesh.ParentBone.Index]; // position
            effect.View = view;             // camera
            effect.Projection = projection; // 2d to 3d
            effect.EnableDefaultLighting();
            effect.PreferPerPixelLighting = true;
        }
        mesh.Draw();
    }
}
For the extra axis, worldMatrix2 is implemented, which I don't know how to combine with the other axis. Do I multiply them? Also:
The camera update method has a similar problem:
public void Update(float avatarYaw, float avatarXaw, Vector3 position, float aspectRatio)
{
    //Matrix rotationMatrix = Matrix.CreateRotationY(avatarYaw);
    Matrix rotationMatrix2 = Matrix.CreateRotationX(avatarXaw);

    //Vector3 transformedheadOffset = Vector3.Transform(AvatarHeadOffset, rotationMatrix);
    Vector3 transformedheadOffset2 = Vector3.Transform(AvatarHeadOffset, rotationMatrix2);
    //Vector3 transformedheadOffset2 = Vector3.Transform(transformedheadOffset, rotationMatrix2);

    //Vector3 transformedReference = Vector3.Transform(TargetOffset, rotationMatrix);
    Vector3 transformedReference2 = Vector3.Transform(TargetOffset, rotationMatrix2);
    //Vector3 transformedReference2 = Vector3.Transform(transformedReference, rotationMatr2);

    Vector3 cameraPosition = position + transformedheadOffset2; /* + transformedheadOffset; */
    Vector3 cameraTarget = position + transformedReference2;    /* + transformedReference; */

    // Calculate the camera's view and projection matrices based on current values.
    ViewMatrix = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
    ProjectionMatrix = Matrix.CreatePerspectiveFieldOfView(
        MathHelper.ToRadians(GameConstants.ViewAngle), aspectRatio,
        GameConstants.NearClip, GameConstants.FarClip);
}
Lastly, here is the relevant part of the Game class's Update method:
spaceship.Update(currentGamePadState, currentKeyboardState); // this will be changed when asteroids are placed in the game, by adding a new parameter for them
float aspectRatio = graphics.GraphicsDevice.Viewport.AspectRatio;
gameCamera.Update(spaceship.ForwardDirection, spaceship.VerticalDirection, spaceship.Position, aspectRatio);
Somewhere in your code, you probably have logic that manipulates and sets the variables ForwardDirection and VerticalDirection, which seem to represent angular values around the Y and X axes respectively. Presumably you intend to have a variable for an angular value around the Z axis as well. I'm also assuming (and your code implies) that these variables are how you store your spaceship's orientation from frame to frame.
As long as you continue to represent an orientation using these angles, you will find it difficult to achieve the control of your spaceship.
There are several ways to represent an orientation. The three-angle (Euler) approach has inherent weaknesses, such as gimbal lock, when it comes to combining rotations in three dimensions.
My recommendation is that you shift paradigms and consider storing and manipulating your orientations in matrix or quaternion form. Once you learn to manipulate a matrix or quaternion, what you are trying to do becomes almost unbelievably easy.
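For illustration, a minimal (untested) sketch of quaternion-based orientation in XNA; the method names and axis conventions are assumptions, and the multiplication order may need adjusting for your setup:
// store a single Quaternion instead of separate yaw/pitch angles
Quaternion orientation = Quaternion.Identity;

public void Turn(float yaw, float pitch, float roll)
{
    // this frame's incremental rotation
    Quaternion delta = Quaternion.CreateFromYawPitchRoll(yaw, pitch, roll);
    // accumulate it, renormalizing to avoid drift
    orientation = Quaternion.Normalize(orientation * delta);
}

public Matrix WorldMatrix()
{
    // rotation first, then translation, as in your worldMatrix
    return Matrix.CreateFromQuaternion(orientation) * Matrix.CreateTranslation(Position);
}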

Direct3D 11 WorldViewProjection Matrix Transformation Not Working

I have a simple square I'm drawing in 3D space using Direct3D 11 and SlimDX, with the following coordinates (I know it renders)
0,0,0.5
0,0.5,0.5
0.5,0.5,0.5
0.5,0,0.5
I have a camera class that handles camera movement by applying matrix transformations to the viewmatrix. e.g. Camera.MoveForward(float x) moves the camera forward by x. It also holds the View and Projection matrices. I instantiate them using the following code:
Matrix view = Matrix.LookAtLH(
    new Vector3(0f, 0f, -5f),
    new Vector3(0f, 0f, 0f),
    new Vector3(0f, 1f, 0f));
Matrix projection = Matrix.PerspectiveFovLH(
    (float)Math.PI / 2,
    WINDOWWIDTH / WINDOWHEIGHT,
    0.1f,
    110f);
The world matrix is set to the identity matrix.
In my shader code I transform my coordinates using the following code:
PS_IN VS(VS_IN input)
{
    PS_IN output = (PS_IN)0;
    output.pos = mul(input.pos, WorldViewProjection);
    output.col = float4(1, 1, 1, 1);
    return output;
}
Where WorldViewProjection is set in my code using the following:
Matrix worldview = Matrix.Multiply(world, camera.ViewMatrix);
Matrix worldviewprojection = Matrix.Multiply(worldview, camera.ProjectionMatrix);
I know the camera class's transformations work, as it's old code I wrote for an MDX application, where it ran fine. However, when this runs I can see my square, and moving forwards and backwards works just fine, but moving left and right seems to rotate the square around the Y (up) axis instead of translating it. Moving up and down has a similar effect, rotating about the X (horizontal) axis instead.
For reference, the camera movement functions:
public void MoveRight(float distance)
{
    position.Z += (float)(Math.Sin(-angle.Y) * -distance);
    position.X += (float)(Math.Cos(-angle.Y) * -distance);
    updateViewMatrix();
}

public void MoveForward(float distance)
{
    position.X += (float)(Math.Sin(-angle.Y) * -distance);
    position.Z += (float)(Math.Cos(-angle.Y) * -distance);
    updateViewMatrix();
}

public void MoveUp(float distance)
{
    position.Y -= distance;
    updateViewMatrix();
}

private void updateViewMatrix()
{
    ViewMatrix = Matrix.Translation(position.X, position.Y, position.Z) *
                 Matrix.RotationY(angle.Y) * Matrix.RotationX(angle.X) * Matrix.RotationZ(angle.Z);
}
The issue may lie in this line:
output.pos = mul( input.pos, WorldViewProjection );
I would change output.pos to be a float4 and then do:
output.pos = mul( float4(input.pos, 1), WorldViewProjection );
If input.pos is only a float3, the w component needed to pick up the matrix's translation is missing, which could explain why camera movement shows up as rotation rather than translation. Let me know how this works out. If this isn't the problem, I'll take a deeper look at what may be going on.
