Unity3D MultiplyPoint matrix confusion - c#

This MultiplyPoint function simply multiplies up to a 4x4 matrix by a Vector3 or Vector4.
I am trying to replicate this function using only maths, as I'm working in shaders.
The maths itself is nothing spectacular: just the dot product of each row of the matrix with the vector to get the resulting transform.
My problem is that Unity is returning my transform matrix as a 2x4,
and my vector to transform is a Vector4 (the Vector3 world position with an added 1 in the fourth component).
My conclusion is that this Unity function must be doing something other than standard matrix multiplication; any thoughts on this are welcome.

This MultiplyPoint function simply multiplies up to a 4x4 matrix by a Vector3 or Vector4.
That's actually wrong: it only multiplies by a Vector3 - check the doco.
Issue: to be helpful, Unity annoyingly "casts" between Vector2, Vector3 and Vector4 (in different ways, which sometimes do and sometimes don't make sense).
If you're giving it a Vector4 as the argument, you are not actually getting what you think - it's "casting" it to a Vector3 in an attempt to be helpful.
I hope that's the problem here!
Just for the sake of any new chums googling here: in a = b.MultiplyPoint(c), b (that's b, not a or c) is a FOUR (i.e. a 4x4 transform, a "movement"), c is a THREE (a position), and the result a is a THREE (a position).
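For anyone who wants to replicate the maths by hand, as the question asks: a minimal sketch of what Matrix4x4.MultiplyPoint appears to do, assuming the usual convention of treating the point as (x, y, z, 1), dotting each matrix row with it, then dividing by the resulting w (MultiplyPoint3x4 skips the divide). The helper name is mine, not Unity's.

using UnityEngine;

public static class MatrixMath
{
    // Sketch of Matrix4x4.MultiplyPoint: treat p as (x, y, z, 1),
    // dot each matrix row with it, then divide by the resulting w.
    public static Vector3 MultiplyPointManual(Matrix4x4 m, Vector3 p)
    {
        float x = m.m00 * p.x + m.m01 * p.y + m.m02 * p.z + m.m03;
        float y = m.m10 * p.x + m.m11 * p.y + m.m12 * p.z + m.m13;
        float z = m.m20 * p.x + m.m21 * p.y + m.m22 * p.z + m.m23;
        float w = m.m30 * p.x + m.m31 * p.y + m.m32 * p.z + m.m33;
        return new Vector3(x / w, y / w, z / w);
    }
}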

Related

Unity Quaternion - get Euler angles of a component B relative to component A

I cannot get my head around Quaternions, and have tried many combinations of operators on this (after reading / viewing tutorials). No success - it just shows my lack of understanding. I even tried it with angles and atan2() - but all that really showed me was what gimbal lock is. The problem I am trying to solve is:
I have a component (say A, a cube) with some rotation in the world frame of reference (doesn't actually matter what that rotation is, could be orientated any direction).
Then I have another component (say B, another cube) elsewhere in the world space. Doesn't matter what that component's rotation is, either.
What I want to get to in the end are the (Euler) angles of B relative to A, BUT relative to A's frame of reference, not world space. The reason is that I want to add forces to A based on B's direction from A, which I can then do with A.rigidbody.AddRelativeForce(x, y, z).
Stuck.
With thanks for the comments, I think I have it.
For reference: given a vector and a rotation expressed in the same (world frame) coordinates, a new vector representing the original, but viewed from the local frame defined by that world rotation, can be calculated as:
static Vector3 createVectorForLocalRotationFrame(Vector3 worldVector, Quaternion worldRotation, bool normalise)
{
    // Rotating by the inverse of the frame's rotation expresses
    // the world-space vector in that frame's local coordinates.
    Vector3 localVector = Quaternion.Inverse(worldRotation) * worldVector;
    if (normalise)
        localVector.Normalize();
    return localVector;
}
The normalise option is just something I needed. Don't ask me how it works, but it appears to be right.
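Applied to the original question, a sketch of the intended use (the parameter names here are my placeholders, not from the post): take the world-space direction from A to B, express it in A's local frame, and feed it to AddRelativeForce.

// Hypothetical usage: push A towards B in A's own frame of reference.
void PushTowards(Transform A, Transform B, Rigidbody aBody, float forceMagnitude)
{
    Vector3 worldDirection = B.position - A.position;
    Vector3 localDirection = createVectorForLocalRotationFrame(worldDirection, A.rotation, true);
    aBody.AddRelativeForce(localDirection * forceMagnitude);
}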

Unity Mesh Renderer generating polygons on the wrong side

So I need to generate a polygon with n vertices using the Unity Mesh component. I am using a custom Triangulate() function that is able to find the indices for mesh.triangles.
The problem is that, depending on the vertices I feed that function, the polygon generates on the wrong side and sometimes is not visible to the camera (unless I flip the camera to the other side).
Now I know this has to do with Unity's clockwise winding order, but how can I make sure the polygon is always generated on the correct side, no matter the vertices I feed it? Or could there be a way to know on which side the mesh was generated, so I can adjust the camera accordingly?
The Triangulator function I use
[Figure: the normal vector n of a triangle.]
The visibility of the triangle is based on its normal. The math that calculates the light that hits the triangle, is reflected, and reaches your eyes (the Unity camera) uses the triangle's normal for that. Basically, if that vector n points towards your eyes, it means you can see [part of] the light that hits that surface.
To know if you can see the triangle, you need to know if the normal points towards you. Without going further into the math: the normal is given by the cross product of two edge vectors defined by the vertices of the triangle.
For example, a triangle A-B-C can be defined by the vectors AB and BC (not related to the figure above). Or you can invert the "direction" of this triangle and define it by AC and CB. The normal of AB/BC has one direction and the normal of AC/CB has the opposite direction, because of how the cross product works - if you google this stuff you can learn why; there are tons of tutorials.
So I wrote all this to tell you something you already know: the order of the vertices defines the visibility. But that's because it defines the direction of the normal. Now take a look at this code:
var a = new Vector3(0f, 0f, 0f);
var b = new Vector3(0.5f, 0.5f, 0f);
var c = new Vector3(0f, 1f, 0f);
var ab = b - a;
var bc = c - b;
Debug.Log(Vector3.Cross(ab, bc)); // this prints (0.0, 0.0, 0.5)
var ac = c - a;
var cb = b - c;
Debug.Log(Vector3.Cross(ac, cb)); // this prints (0.0, 0.0, -0.5)
Notice that in the second case z is negative, so the normal is pointing towards you (your camera is probably set at (0, 0, -10) or something similar). So if you define the triangle that way, you will be able to see it.
Long story short: to know if you can see a triangle, test the sign of the z component of the cross product of its edge vectors. If the sign is not the one you want, reverse the order of the vertices.
I didn't read the code of your triangulator function, but I saw that it's working in 2D. That means it can probably be simplified further. It also seems to be calculating the cross product in InsideTriangle(), so you can probably reuse the calculations already going on there to check the sign, with [almost] zero performance loss.
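As a concrete sketch of that idea (the helper name is mine, not part of the Triangulator): after triangulating, check the sign of each triangle's cross product and swap two indices to flip any triangle that winds the wrong way for a camera sitting on the negative z axis.

// Hypothetical post-processing step: flip any triangle whose normal
// points away from a camera on the negative z axis.
static void FlipBackFacingTriangles(Vector3[] vertices, int[] triangles)
{
    for (int i = 0; i < triangles.Length; i += 3)
    {
        Vector3 a = vertices[triangles[i]];
        Vector3 b = vertices[triangles[i + 1]];
        Vector3 c = vertices[triangles[i + 2]];
        // A positive z here means the normal points away from that camera.
        if (Vector3.Cross(b - a, c - b).z > 0f)
        {
            // Swapping two indices reverses the winding order.
            int tmp = triangles[i + 1];
            triangles[i + 1] = triangles[i + 2];
            triangles[i + 2] = tmp;
        }
    }
}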

How to find the rotation of a 3D triangle

I need a function that, when given the Vector3 for a, b and c, will give me a new Vector3: the "rotation" (facing direction) of the triangle. Pretty much, for a point d, if I want to move it out, directly away from the triangle's face, I just have to multiply the distance I want to move it by that Vector3, and add the old position to get the new location.
The vector you want is called the unit normal vector. "Unit" means the length is 1 (so that you can just multiply by distance), and "normal" is the name of the vector that's perpendicular to a surface.
To get it, take the cross-product of any two edges of your triangle, and normalize the result. Look at this question for details on how to do this mathematically.
Note: "Normalizing" a vector means to keep the direction the same, but change the length to 1. It doesn't directly relate to a "normal vector".

Finding angle between two markers for use in mathematical optimisation

I am trying to minimize the difference between sets of square markers in 3d space with a set of unknown parameters.
I have a model set of these square markers (represented by 3d position and rotation) which should at the end of optimization match up with a set of observed square markers.
I am using Levenberg-Marquardt to optimize the set of unknown parameters; these parameters will alter the position and rotation of the model 3d markers until they match (more or less) the observed 3d marker positions.
The observed 3d markers come from a computer vision marker detection algorithm. It gives the id of the markers seen in each frame and the transformation of each marker from the camera (using coplanar POSIT). Each 'frame' only sees a small number of markers out of the total set, and there will also be inaccuracies in the transformations.
I have thought about how to construct my minimization function: the idea is to compare the relative rotations between markers and minimize the difference between the model and observed rotations in each iteration of the LM optimisation.
Essentially:
foreach (Marker m1 in markers)
{
    foreach (Marker m2 in markers)
    {
        // Relative rotation between the two model markers...
        Vector3 eulerRotation = getRotation(m1, m2);
        ObservedMarker observed1 = getMatchingObserved(m1);
        ObservedMarker observed2 = getMatchingObserved(m2);
        // ...and between the corresponding observed markers.
        Vector3 eulerRotationObserved = getRotation(observed1, observed2);
        double diffX = Math.Abs(eulerRotation.X - eulerRotationObserved.X);
        double diffY = Math.Abs(eulerRotation.Y - eulerRotationObserved.Y);
        double diffZ = Math.Abs(eulerRotation.Z - eulerRotationObserved.Z);
    }
}
Where diffX, diffY and diffZ are the values to be minimized.
I am using the following to calculate the angles:
// Axis perpendicular to both marker normals...
Vector3 axis = Vector3.Cross(getNormal(m1), getNormal(m2));
axis.Normalize();
// ...and the angle between the normals.
double angle = Math.Acos(Vector3.Dot(getNormal(m1), getNormal(m2)));
Vector3 modelRotation = calculateEulerAngle(axis, angle);
getNormal(Marker m) calculates the normal to the plane that the square marker lies on.
I am sure I am doing something wrong here, though. Throwing this all into the LM optimiser (I am using ALGLib) doesn't seem to do anything; it goes through one iteration and finishes without changing any of the unknown parameters (initially all 0).
I am thinking that something is wrong with the function I am trying to minimize. It seems that sometimes the angle calculation (the Math.Acos line) returns NaN (I am currently setting this case to return diffX, diffY and diffZ as 0). Is it even valid to compare the Euler angles as above?
Any help would be greatly appreciated.
Further information:
Program is written in C#, I am using XNA as well.
Each model marker is represented by its four corners in 3D coordinates.
All the model markers are in the same coordinate space.
Observed markers are given as four corners, expressed as translations from the camera position in camera coordinate space.
If m1 and m2 markers are the same marker id or if either m1 or m2 is not observed, I set all the diffs to 0 (no difference).
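As an aside on the NaN mentioned above: Math.Acos returns NaN whenever floating-point error pushes the dot product of two unit vectors slightly outside [-1, 1], which is a common cause of exactly this symptom. A minimal guard, sketched against the question's own getNormal helper:

double dot = Vector3.Dot(getNormal(m1), getNormal(m2));
// Floating-point error can leave dot marginally outside [-1, 1],
// which makes Math.Acos return NaN; clamp before taking the arccosine.
dot = Math.Max(-1.0, Math.Min(1.0, dot));
double angle = Math.Acos(dot);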
At first I thought this might be a typo, but then I realized that this could be a bug, having been a victim of similar cases myself in the past.
Shouldn't diffY and diffZ be:
double diffY = Math.Abs(eulerRotation.Y - eulerRotationObserved.Y);
double diffZ = Math.Abs(eulerRotation.Z - eulerRotationObserved.Z);
I don't have enough reputation to post this as a comment, hence posting it as an answer!
Any luck with this? Is it correct to assume that you want to minimize the "sum" of all diffs over all marker combinations? I think if you want to use LM you should not use Math.Abs: LM is a least-squares method that squares the residuals itself, and Math.Abs is not differentiable at zero, which can stall the optimizer.
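A sketch of what that means in practice, using the names from the question's code: hand LM the signed residuals and let it do the squaring internally.

// Signed residuals: LM minimizes the sum of squared residuals itself,
// so no Math.Abs is needed and the residuals stay differentiable.
double diffX = eulerRotation.X - eulerRotationObserved.X;
double diffY = eulerRotation.Y - eulerRotationObserved.Y;
double diffZ = eulerRotation.Z - eulerRotationObserved.Z;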
One alternative would be to formulate your objective function manually and use another optimizer. I have recently ported two non-linear optimizers to C#, neither of which requires you to compute derivatives:
COBYLA2, which supports non-linear constraints but requires more iterations.
BOBYQA, which is limited to variable-bound constraints but provides a considerably more efficient iteration scheme.

Distance to a plane

I've written a simple little helper method which calculates the distance from a point to a plane. However, it seems to be returning nonsensical results. The code I have for creating a plane is this:
Plane = new Plane(vertices.First().Position, vertices.Skip(1).First().Position, vertices.Skip(2).First().Position);
Fairly simple, I hope you'll agree. It creates an XNA plane structure using three points.
Now, immediately after this I do:
foreach (var v in vertices)
{
    float d = Math.Abs(v.ComputeDistance(Plane));
    if (d > Constants.TOLERANCE)
        throw new ArgumentException("all points in a polygon must share a common plane");
}
Using the same set of vertices I used to construct the plane, I get that exception thrown! Mathematically this is impossible, since those three points must lie on the plane.
My ComputeDistance method is:
public static float ComputeDistance(this Vector3 point, Plane plane)
{
float dot = Vector3.Dot(plane.Normal, point);
float value = dot - plane.D;
return value;
}
As I understand it, this is correct. So what could I be doing wrong? Or might I be encountering a bug in XNA's implementation?
Some example data:
Points:
{X:0 Y:-0.5000001 Z:0.8660254}
{X:0.75 Y:-0.5000001 Z:-0.4330128}
{X:-0.75 Y:-0.5000001 Z:-0.4330126}
Plane created:
{Normal:{X:0 Y:0.9999999 Z:0} D:0.5} //I believe D should equal -0.5?
Distance from point 1 to plane:
1.0
It seems that your Plane is implemented so that D is not the projection of one of your points onto the plane normal, but rather the negative of this. You can think of this as projecting a vector from the plane to the origin onto the normal.
In any case, I believe that changing
float value = dot - plane.D;
to
float value = dot + plane.D;
should fix things. HTH.
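In other words, XNA's Plane appears to be stored so that Dot(Normal, P) + D = 0 for points P on the plane, which matches the example data (Dot = -0.5, D = 0.5). Under that assumption, the corrected helper would be:

public static float ComputeDistance(this Vector3 point, Plane plane)
{
    // XNA stores D so that Dot(Normal, P) + D == 0 on the plane,
    // so the signed distance is the dot product plus D, not minus.
    return Vector3.Dot(plane.Normal, point) + plane.D;
}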
Ok, I'm not totally sure I understand the math here, but I suspect (based on formulas from http://mathworld.wolfram.com/Point-PlaneDistance.html, among others) that
float value = dot - plane.D;
should actually be
float value = dot / plane.D;
EDIT: Ok, as mentioned in the comments below, this didn't work. My best suggestion then is to look at the link, or google "distance between a point and a plane", and try implementing the formula a different way.
