I am attempting to fuse skeletons from two separate Kinects.
From the first Kinect I have skeleton1; from it I choose 4 points corresponding to 4 of its joints. With those 4 points I can construct the plane on which they all lie. The plane coefficients in Ax + By + Cz = D are known. As I understand it, the plane's quaternion would be Q = (D; A; B; C).
From the second Kinect I have the same data, but in the second Kinect's coordinate system.
How can I rotate the plane from second Kinect so that it would have the same orientation as the plane from the first Kinect?
Unfortunately, quaternions don't work that way. They do not represent a plane, and cannot be created just by re-arranging the components of a plane equation (a plane's orientation has only two degrees of freedom, its unit normal, while a quaternion represents a full three-degree-of-freedom rotation/orientation).
However, if you are looking for a rotation between two normal vectors (one for each plane), which will transform one plane to another, it is a relatively simple task:
if the two normal vectors are the same, there is no rotation needed and the result is identity,
if the two vectors are parallel but not identical (for unit normals, their dot product is -1), the transformation between them is a rotation of 180 degrees around an arbitrary axis perpendicular to them,
in all other cases, you can compute the axis of rotation using a cross product (and normalization), and the angle using the arc cosine of their dot product.
If you still need to create a quaternion out of them, you can do it with a bit of trigonometry.
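For concreteness, here is a minimal C# sketch of those three cases using System.Numerics (the type choices and the NormalsToQuaternion name are mine, not from the original post):

using System;
using System.Numerics;

static class PlaneAlignment
{
    // Quaternion rotating unit normal n2 onto unit normal n1.
    public static Quaternion NormalsToQuaternion(Vector3 n1, Vector3 n2)
    {
        float d = Vector3.Dot(n1, n2);
        if (d > 0.9999f)
            return Quaternion.Identity;              // same normal: no rotation needed
        if (d < -0.9999f)
        {
            // Opposite normals: 180 degrees about any axis perpendicular to n1.
            Vector3 perp = Vector3.Cross(n1, Vector3.UnitX);
            if (perp.LengthSquared() < 1e-6f)        // n1 happened to be parallel to UnitX
                perp = Vector3.Cross(n1, Vector3.UnitY);
            return Quaternion.CreateFromAxisAngle(Vector3.Normalize(perp), (float)Math.PI);
        }
        // General case: axis from the cross product, angle from the dot product.
        Vector3 axis = Vector3.Normalize(Vector3.Cross(n2, n1));
        float angle = (float)Math.Acos(d);
        return Quaternion.CreateFromAxisAngle(axis, angle);
    }
}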
I'm wondering if anyone knows how to find the smallest distance from a 3D line segment or ray to a 3D cubic Bézier curve, or whether there is anything built into the Unity game engine for this.
Calculate the affine transformation that moves the ray's base point to (0,0,0) and turns its direction into the OX axis.
If the ray is defined by base point (rx0, ry0, rz0) and direction vector (dx, dy, dz) with length len (1 if normalized), then the matrix of this transform is the product of:
a shift by (-rx0, -ry0, -rz0),
then a rotation around the OZ axis by -atan2(dy, dx),
then a rotation around the OY axis by asin(dz / len) (equivalently atan2(dz, sqrt(dx^2 + dy^2))).
Apply this transform to the Bézier curve control points.
Calculate the minimal distance from the curve to the OX axis using the zero-derivative approach (the derivative of a function is zero at its minima and maxima):
Write an expression for the squared distance as a function of t, where by(t) and bz(t) are the y and z components of the transformed curve:
SquaredDist(t) = by(t)^2 + bz(t)^2
Differentiate SquaredDist with respect to t, set up the equation
SquaredDist'(t) = 0
and solve it for t.
This is a 5th-degree polynomial equation, so there is no analytical solution; only a numerical one (or a subdivision approach).
Check the roots in the 0..1 interval, and also check the distances at the Bézier curve's endpoints and to the ray's start point separately.
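As a rough illustration, here is a numerical C# sketch (using System.Numerics; all names are mine, not Unity API). It skips the explicit affine transform and measures the point-to-ray distance directly, which is equivalent, and it replaces the quintic root-finding with dense sampling plus a local golden-section refinement:

using System;
using System.Numerics;

static class RayBezierDistance
{
    // Evaluate a cubic Bezier curve at parameter t.
    static Vector3 Bezier(Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3, float t)
    {
        float u = 1f - t;
        return u * u * u * p0 + 3f * u * u * t * p1 + 3f * u * t * t * p2 + t * t * t * p3;
    }

    // Distance from point c to the ray (origin o, unit direction d),
    // clamped so that points behind the ray start measure to the start itself.
    static float DistToRay(Vector3 c, Vector3 o, Vector3 d)
    {
        float s = Math.Max(0f, Vector3.Dot(c - o, d));
        return (c - (o + s * d)).Length();
    }

    public static float MinDistance(Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3,
                                    Vector3 rayOrigin, Vector3 rayDir, int samples = 256)
    {
        rayDir = Vector3.Normalize(rayDir);

        // Dense sampling to locate the global minimum region
        // (t = 0 and t = 1 cover the curve endpoints).
        float best = float.MaxValue, bestT = 0f;
        for (int i = 0; i <= samples; i++)
        {
            float t = i / (float)samples;
            float dist = DistToRay(Bezier(p0, p1, p2, p3, t), rayOrigin, rayDir);
            if (dist < best) { best = dist; bestT = t; }
        }

        // Golden-section refinement around the best sample.
        float lo = Math.Max(0f, bestT - 1f / samples);
        float hi = Math.Min(1f, bestT + 1f / samples);
        for (int i = 0; i < 40; i++)
        {
            float m1 = lo + 0.382f * (hi - lo), m2 = lo + 0.618f * (hi - lo);
            float d1 = DistToRay(Bezier(p0, p1, p2, p3, m1), rayOrigin, rayDir);
            float d2 = DistToRay(Bezier(p0, p1, p2, p3, m2), rayOrigin, rayDir);
            if (d1 < d2) hi = m2; else lo = m1;
        }
        float refined = DistToRay(Bezier(p0, p1, p2, p3, 0.5f * (lo + hi)), rayOrigin, rayDir);
        return Math.Min(best, refined);
    }
}

A subdivision approach, recursing on the curve halves closest to the ray, would be the more robust alternative mentioned above.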
I am using Kinect v2. I want to turn Kinect skeleton coordinates into real-life measurements. I have read this: Understanding Kinect V2 Joints and Coordinate System
As I understand it, if the coordinates are 0.3124103 X, 0.5384778 Y, 2.244482 Z, that means I am 0.31 meters to the left of, 0.54 meters above, and 2.24 meters in front of the sensor. These are the coordinates of my head, and the sensor is 0.5 meters above the ground. Does that make my height about 1 meter, or am I doing something wrong? Is there an optimal position or height that makes the calculation more accurate? Or is there a different method to calculate it? Does anybody know how to do this? Thank you :)
You need to account for the tilt of the sensor.
In your example, your calculation is correct only if the sensor is facing exactly forward. If the Kinect is tilted upwards, your actual height is greater than that calculation suggests.
You can calculate the tilt of the sensor and the height of the sensor using BodyFrame.FloorClipPlane.
Then you need to transform the joint coordinates from the Kinect's camera space into real-world xyz coordinates.
See the marked answer at this post "FloorClipPlane & Joint Data correlation" by Eddy Escardo-Raffo [MSFT]
What you need to do is a coordinate transform from the Cartesian space defined by the basis vectors of the Kinect's point of view (let's call them KV) into the Cartesian space defined by the desired basis vectors (let's call these DV).
When the camera is not tilted at all, KV and DV are exactly the same, so, since this is the desired vector space, for simplicity we can use the standard unit vectors to represent the axes:
x: [1, 0, 0]
y: [0, 1, 0]
z: [0, 0, 1]
Now, when you tilt the camera upwards by an angle A, the x axis stays the same but the y-z plane rotates by A (i.e., it corresponds exactly to a counter-clockwise rotation about the X axis), so the basis vectors of KV (in terms of the basis vectors of DV) are now
x: [1, 0, 0]
y: [0, cos A, -sin A]
z: [0, sin A, cos A]
To convert coordinates relative to KV into coordinates relative to DV, you have to perform a matrix multiplication between the transformation matrix defined by these basis vectors of KV (http://en.wikipedia.org/wiki/Transformation_matrix) and a joint position vector that you receive from the Kinect API. This will result in a joint position vector relative to DV.
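Putting this together with FloorClipPlane, a minimal C# sketch might look like this (using System.Numerics types as stand-ins for the Kinect SDK's CameraSpacePoint and Vector4; the method name is mine, and it assumes the usual reading of FloorClipPlane, where (x, y, z) is the floor normal in camera space and w is the sensor height in meters):

using System;
using System.Numerics;

// Correct a camera-space joint for sensor tilt and height.
static Vector3 ToFloorSpace(Vector3 joint, Vector4 floorClipPlane)
{
    // Tilt about the X axis: angle of the floor normal away from straight up.
    float tilt = (float)Math.Atan2(floorClipPlane.Z, floorClipPlane.Y);
    float cos = (float)Math.Cos(tilt), sin = (float)Math.Sin(tilt);

    // Counter-rotate about X so that Y points straight up...
    float y = joint.Y * cos + joint.Z * sin;
    float z = -joint.Y * sin + joint.Z * cos;

    // ...then shift the origin down to the floor.
    return new Vector3(joint.X, y + floorClipPlane.W, z);
}

The returned Y is then the joint's height above the floor, which is the number to compare against real-world measurements.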
You may also find this answer helpful: Sergio's answer to "Transform world space using Kinect FloorClipPlane to move origin to floor while keeping orientation"
I have to find the axis and angle of rotation of a camera given its up and direction vectors (the two are perpendicular to each other). I have the initial and final up and direction vectors of the camera that was rotated, and I want to find the axis and angle of that rotation. I am using C# for my project. I am new to 3D rotation, so pardon my questions if you find them silly.
From the direction (forward) vector f and up vector u you can get the side vector s by performing a vector cross product (s = f x u). All three vectors are now mutually orthogonal. You should also make them orthonormal by normalizing each of them. Taken together, these vectors form an orthonormal basis.
You now have two such bases: one from your initial camera orientation and one from your final camera orientation. Each basis can be represented as a rotation matrix. A rotation matrix is simply a 3x3 matrix whose 3 rows are, respectively:
The forward vector
The up vector
The side vector
For example, the matrix:
[[1 0 0]
[0 1 0]
[0 0 1]]
could be your initial camera orientation at start-up with its forward vector, up vector and side vector pointing towards the positive x axis, y axis and z axis, respectively.
You can now convert these two bases (M1 and M2) to two unit quaternions (Q1 and Q2) using this algorithm, which takes care of potential problems like division by zero.
At this point, you have two unit quaternions representing your initial and final camera orientation. You must now find the quaternion qT that transforms Q1 into Q2, that is:
q2 = qT * q1
q2 * q1^-1 = qT * (q1 * q1^-1) = qT
=> qT = q2 * q1^-1
Knowing that the inverse of a unit quaternion is equal to its conjugate:
q1^-1 = q1* iff ||q1|| = 1
qT = q2 * q1^-1 = q2 * q1*
There is a single step left: extracting the axis and angle from quaternion qT:
angle = 2 * acos(qw)
x = qx / sqrt(1-qw*qw)
y = qy / sqrt(1-qw*qw)
z = qz / sqrt(1-qw*qw)
The angle is, of course, given in radians. Beware of division by zero when calculating x, y and z. This situation occurs when there is no rotation, or a very small one, so you should test whether angle > epsilon, where epsilon is chosen to be quite a small angle (say 1/10 of a degree), and not calculate the vector when it is below that.
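A condensed C# sketch of the whole pipeline using System.Numerics (the class and method names are mine; depending on whether your math library treats vectors as rows or columns, you may need to transpose the matrix before converting it):

using System;
using System.Numerics;

static class CameraRotation
{
    // Build rotation matrices whose rows are forward/up/side (as described
    // above), convert them to quaternions, and compute qT = q2 * q1^-1.
    public static void FindRotation(
        Vector3 f1, Vector3 u1, Vector3 f2, Vector3 u2,
        out Vector3 axis, out float angle)
    {
        Quaternion q1 = BasisToQuaternion(f1, u1);
        Quaternion q2 = BasisToQuaternion(f2, u2);

        // For unit quaternions the inverse equals the conjugate.
        Quaternion qT = q2 * Quaternion.Conjugate(q1);

        float w = Math.Min(1f, Math.Max(-1f, qT.W));
        angle = 2f * (float)Math.Acos(w);

        float s = (float)Math.Sqrt(1f - w * w);
        const float epsilon = 1e-3f;             // guard against the tiny-rotation case
        axis = s > epsilon
            ? new Vector3(qT.X / s, qT.Y / s, qT.Z / s)
            : Vector3.UnitX;                     // arbitrary axis when rotation is ~0
    }

    static Quaternion BasisToQuaternion(Vector3 forward, Vector3 up)
    {
        Vector3 f = Vector3.Normalize(forward);
        Vector3 s = Vector3.Normalize(Vector3.Cross(f, up)); // side = f x u
        Vector3 u = Vector3.Cross(s, f);                     // re-orthogonalized up
        var m = new Matrix4x4(
            f.X, f.Y, f.Z, 0,
            u.X, u.Y, u.Z, 0,
            s.X, s.Y, s.Z, 0,
            0,   0,   0,   1);
        return Quaternion.CreateFromRotationMatrix(m);
    }
}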
I am learning XNA, and in almost all of the educational kits found on http://creators.xna.com/en-US/ I see a call to Normalize() on a vector. I understand that normalizing basically converts the vector to unit length, so all it carries is direction.
Now my question is: when should I normalize, and what exactly does it help me with? I am doing 2D programming, so please explain in 2D concepts, not 3D.
EDIT: Here is the code from the XNA kit; why is Normalize being called?
// Accumulate a movement direction from keyboard/gamepad input.
if (currentKeyboardState.IsKeyDown(Keys.Left)
    || currentGamePadState.DPad.Left == ButtonState.Pressed)
{
    catMovement.X -= 1.0f;
}
if (currentKeyboardState.IsKeyDown(Keys.Right)
    || currentGamePadState.DPad.Right == ButtonState.Pressed)
{
    catMovement.X += 1.0f;
}
if (currentKeyboardState.IsKeyDown(Keys.Up)
    || currentGamePadState.DPad.Up == ButtonState.Pressed)
{
    catMovement.Y -= 1.0f;
}
if (currentKeyboardState.IsKeyDown(Keys.Down)
    || currentGamePadState.DPad.Down == ButtonState.Pressed)
{
    catMovement.Y += 1.0f;
}

float smoothStop = 1;

// Normalize only when non-zero (Normalize on a zero vector is undefined).
if (catMovement != Vector2.Zero)
{
    catMovement.Normalize();
}
catPosition += catMovement * 10 * smoothStop;
In your example, the keyboard presses give you movement in X or Y, or both. In the case of both X and Y, as when you press right and down at the same time, your movement is diagonal. But where movement in X or Y alone gives you a vector of length 1, the diagonal vector is longer than 1: about 1.414 (the square root of 2).
Without normalizing the movement vector, diagonal movement would be faster than movement along X or Y alone. With normalizing, the speed is the same in all 8 directions, which I guess is what the game calls for.
One common use case for vector normalization is when you need to move something by a number of units in a direction. For example, if you have a game where an entity A moves towards an entity B at a speed of 5 units/second, you take the vector from A to B (which is B - A), normalize it so you keep only the direction towards B from A's viewpoint, and then multiply it by 5 units/second. The resulting vector is the velocity of A, and you can then simply multiply it by the elapsed time to get the displacement by which to move the object.
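In XNA code, that might look like the following sketch (positionA, positionB and elapsedSeconds are hypothetical names, not from any kit):

Vector2 toTarget = positionB - positionA;      // vector from A to B
if (toTarget != Vector2.Zero)
{
    toTarget.Normalize();                      // keep only the direction
    Vector2 velocity = toTarget * 5f;          // 5 units/second towards B
    positionA += velocity * elapsedSeconds;    // displacement for this frame
}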
It depends on what you're using the vector for, but if you're only using the vectors to give a direction then a number of algorithms and formulas are just simpler if your vectors are of unit length.
For example, angles: the angle theta between two unit vectors u and v is given by the formula cos(theta) = u.v (where . is the dot product). For non-unit vectors, you have to compute and divide out the lengths: cos(theta) = (u.v) / (|u| |v|).
A slightly more complicated example: projection of one vector onto another. If v is a unit vector, then the orthogonal projection of u onto v is given by (u.v) v, while if v is a non-unit vector then the formula is (u.v / v.v) v.
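Both formulas as XNA-style C# (a sketch; the method names are mine):

// Angle between u and v, in radians; for unit vectors the division is unnecessary.
static float Angle(Vector2 u, Vector2 v)
{
    return (float)Math.Acos(Vector2.Dot(u, v) / (u.Length() * v.Length()));
}

// Orthogonal projection of u onto v; when v is a unit vector,
// Vector2.Dot(v, v) is 1 and this reduces to Dot(u, v) * v.
static Vector2 Project(Vector2 u, Vector2 v)
{
    return Vector2.Dot(u, v) / Vector2.Dot(v, v) * v;
}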
In other words: normalize if you know that all you need is the direction and if you're not certain the vector is a unit vector already. It helps you because you're probably going to end up dividing your direction vectors by their length all the time, so you might as well just do it once when you create the vector and get it over with.
EDIT: I assume that the reason Normalize is being called in your example is so that the direction of velocity can be distinguished from the speed. In the final line of the code, 10 * smoothStop is the speed of the object, which is handy to know. And to recover velocity from speed, you need to multiply by a unit direction vector.
You normalize a vector by dividing each component by the vector's magnitude, which is the square root of the sum of the squares of its components. Each component then lies in [-1, 1], and the resulting magnitude is exactly one. This is the definition of a unit vector.
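In 2D that is just the following (a sketch, where v is any non-zero Vector2):

float length = (float)Math.Sqrt(v.X * v.X + v.Y * v.Y);
Vector2 unit = new Vector2(v.X / length, v.Y / length); // same result as v.Normalize()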
Is there a way, given the center point of a mesh object, to always have it face another given point? For example, if a cylinder is drawn so that it looks like a cone, can I always have the tip of the cone face some other given point?
Well, that's definitely doable. You need to set up an orthonormal basis such that the two objects point at each other. I'm going to assume that your cone is modelled so that the tip points down the Z axis (you need to bear this in mind).
Cone A is at position P
Cone B is at position Q
The direction from A to B is Q - P, and from B to A it is P - Q. First we need to normalize both of these vectors so they become unit direction vectors; we'll call them A' and B', respectively.
We can assume, for now, that the up vector (we'll call it U) is (0, 1, 0). (Be warned that the maths here falls over if A' or B' is very close to this up vector, but for now I won't worry about that.)
So we now need the side vector and the true up vector. Fortunately, we can calculate something perpendicular to the plane formed by the up vector and A' (or B') using a cross product.
Thus the side vector S is calculated as A' x U. Now that we have the side vector, we can calculate the true up vector as A' x S. This provides the 3 vectors we need for an orthonormal basis. Note that you should normalize S: since A' and U are generally not perpendicular, A' x U is not unit length. Once S is normalized, A' x S is automatically a unit vector, because the cross product of two perpendicular unit vectors is itself a unit vector.
Using this information we can now build the matrix for Cone A.
S.x, S.y, S.z, 0
U.x, U.y, U.z, 0
A'.x, A'.y, A'.z, 0
P.x, P.y, P.z, 1
Perform the same calculations for both cones and they will now point towards each other. If you move either cone, re-calculate as above for both cones and they will still point towards each other.
It's worth noting that the matrix format I've used is DirectX's default (row-major) layout. It's quite possible that C# (and thus XNA?) uses a column-major format. If so, you need to lay the matrices out as follows:
S.x, U.x, A'.x, P.x
S.y, U.y, A'.y, P.y
S.z, U.z, A'.z, P.z
0, 0, 0, 1
Note that the only difference between the two matrices is that the rows and columns have been swapped, which makes the second matrix the transpose of the first (and vice versa).
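A compact C# sketch of the steps above, using System.Numerics, which like DirectX is row-major (the PointAt name is mine; depending on your handedness conventions you may need to swap the cross-product operand order to avoid a flipped roll):

using System.Numerics;

// Build a world matrix that aims an object at position p (modelled pointing
// down +Z) towards target q. Assumes the direction from p to q is not
// parallel to the provisional up vector.
static Matrix4x4 PointAt(Vector3 p, Vector3 q)
{
    Vector3 a = Vector3.Normalize(q - p);                 // A': unit direction from p to q
    Vector3 u0 = Vector3.UnitY;                           // provisional up vector U
    Vector3 s = Vector3.Normalize(Vector3.Cross(a, u0));  // side vector S = A' x U, normalized
    Vector3 u = Vector3.Cross(a, s);                      // true up vector = A' x S
    return new Matrix4x4(
        s.X, s.Y, s.Z, 0,
        u.X, u.Y, u.Z, 0,
        a.X, a.Y, a.Z, 0,
        p.X, p.Y, p.Z, 1);
}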