Matching Kinect's skeleton data to a .fbx model in XNA - C#

My question is about a school project that I'm working on.
It involves mapping 3D models of clothing (like a pair of jeans) on a skeleton
that is generated by my Kinect camera.
An example can be found here: http://www.youtube.com/watch?v=p70eDrvy-uM.
I have searched on this subject and found some related threads like:
http://forums.create.msdn.com/forums/t/93396.aspx - this thread demonstrates an approach that uses Brekel for motion capture. However, I have to present my project in XNA.
I believe that the answer lies in the skeleton data of the 3D model (properly exported as a .FBX file). Is there a way to align or match that skeleton with the skeleton that the Kinect camera generates?
Thanks in advance!
Edit: I am making some progress. I have been playing around with some different models, trying to move some bones upward (a very simple use of CreateTranslation with a float calculated from the elapsed game time). It works if I choose the root bone, but it doesn't work on some other bones (like a hand or an arm, for example). If I track all the Transform properties of such a bone, including the X, Y, and Z values, I can see that something is moving; however, the chosen bone stays in its place. Does anyone have any thoughts?

If you are interested, you'll find a demo here. It includes source code for real-time motion capture using the Kinect and XNA.

I have been working on this off and on for a while now. A simple solution I'm using right now is to map the joint positions that the NUI skeleton tracks to the rotations of the corresponding joints in a .fbx model. The .fbx model will most likely have many more joints than are tracked, and for those you can just iterate through and apply a rotation.
The fun part:
The Kinect tracks skeleton joint positions in skeleton space (-1 to 1), while your model needs rotations in model space. Both express a child's position or rotation relative to its parent bone in the hierarchy. Also, the rotations you need for a .fbx model are around an arbitrary axis.
What you need is the change from the .fbx model in its bind pose to the pose represented by the Kinect data. To get this, you can do some vector math to find the rotation of a parent joint around an arbitrary axis, rotate it accordingly, and then move on down the skeleton chain.
Say the shoulder is point a and the elbow is point b in the bind pose of the .fbx model, with corresponding points a' and b' on the Kinect skeleton.
So for the bind pose we have a vector V from the shoulder to the elbow:
V = b - a
The Kinect skeleton likewise gives a vector V':
V' = b' - a'
The axis of rotation for the shoulder is then the normalized cross product:
Vhat = normalize(V cross-product V')
And the angle of rotation for the shoulder around Vhat is:
Theta = acos( (V dot-product V') / (magnitude(V) * magnitude(V')) )
So you will want to rotate the shoulder joint on the .fbx model by Theta around Vhat.
The model may seem to flop around a bit, so you may have to add some joint constraints, or use quaternions or other tricks to keep it from looking like a dead fish.
Also, I'm not sure if XNA has a built-in function to rotate around an arbitrary axis. And if someone could double-check my math I'd appreciate it; I'm a little rusty.
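For what it's worth, XNA does have a built-in for this: Matrix.CreateFromAxisAngle (there is a Quaternion.CreateFromAxisAngle as well). Here is a minimal sketch of the math above; GetBindPosition and GetKinectPosition are hypothetical helpers standing in for however you read the bind pose and the Kinect joints:

// Rotate a model bone so its bind-pose bone vector (a -> b) lines up
// with the Kinect bone vector (a' -> b').
Vector3 a = GetBindPosition("Shoulder");        // bind pose shoulder
Vector3 b = GetBindPosition("Elbow");           // bind pose elbow
Vector3 aPrime = GetKinectPosition("ShoulderRight");
Vector3 bPrime = GetKinectPosition("ElbowRight");

Vector3 v = b - a;                  // V  = b  - a
Vector3 vPrime = bPrime - aPrime;   // V' = b' - a'

Vector3 axis = Vector3.Cross(v, vPrime);
if (axis.LengthSquared() > 1e-6f)   // skip if the vectors are (nearly) parallel
{
    axis.Normalize();
    float cosTheta = Vector3.Dot(v, vPrime) / (v.Length() * vPrime.Length());
    float theta = (float)Math.Acos(MathHelper.Clamp(cosTheta, -1f, 1f));

    // Rotate the shoulder bone by theta around the arbitrary axis:
    shoulderBone.Transform = Matrix.CreateFromAxisAngle(axis, theta) * shoulderBone.Transform;
}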

The Kinect SDK delivers only the positions of body parts, like the head position or the right-hand position. Separately, you can also read the depth stream from the Kinect sensor.
But currently the Kinect doesn't generate a 3D model of the body. You would have to do that yourself.

I eventually settled on the Digital Runes option (with its Kinect demo), which was almost perfect apart from a few problems that I wasn't able to solve.
But because I had to hurry for school, we decided to turn the project around completely, and our new solution did not involve the Kinect. The Digital Runes examples did help a lot, though.

Related

Moving the origin in a 2D Vector space

I'm trying to learn more about vectors in a 2D space and how to use them in game development.
I have created a small project for visualising a 'projection' of vector A onto vector B in C# using the MonoGame framework.
This is all working fine, but now I want to move my origin (which is currently in the top-left) to a custom position, so I can, for example, draw my lines in the middle of the screen.
I want to do this without any help from the library first, to understand what is happening.
But I can't figure out how to do this, or whether this is actually best practice in vector spaces, or whether I should just 'draw' my lines with an offset.
My understanding of math symbols and functions is not great, so if you provide a mathematical answer, please explain the symbols as well.
EDIT:
I created another project for visualising whether a point is within a certain angle, but this time I tried to draw everything with an offset (right) next to the original vectors (left).
As you can see, it looks fine if I draw it with an offset, but I can't imagine this method being used in games. Mainly because everything has a weird offset (duh..) with respect to my mouse, so you would need to implement your own cursor (which games do, but still...).
EDIT2:
Let's make my problem a little bit clearer.
If you look at my second example, imagine the origin on the right to be an agent (NPC, player, or whatever) and the segment BC (and BC2) to be its field of vision.
If I want to calculate what is within its vision, I can do that the same way I did in the example, but this 'origin' point would be at (0,0) (top-left), and that is behind the agent.
I'm probably missing something obvious and thinking way too hard about this.
So I finally found out how this works.
Apparently you work with different spaces or frames of reference instead of moving the origin.
A space can live inside another space, but let's keep it simple for now with two spaces:
The first space is your 'main' space (usually called world in game development).
The second space is your 'view' space (or camera).
(I use world and view throughout this answer.)
I was doing all my vector calculations inside my world space. So when drawing these vectors to the screen, they are drawn at positions relative to the world's reference, which is the top-left of the screen.
To draw my vectors somewhere else I need to translate them.
Translation means moving vectors along the axes.
This action of 'changing' the position/scale/rotation of a vector is called a transformation.
"We can see transformations in a vector space simply as a change from one space to another." (quote)
This translation is done by a translation matrix (more info in the quote link).
So with this knowledge of spaces and transformations I fixed my program.
All my vectors are initialized the same way as before, but when I draw my vectors to the screen I translate them according to a pre-defined translation matrix. I call this matrix my viewMatrix because it translates vectors from world space to view space.
But there is one thing that still needs fixing.
The vector pointA is not defined in world space, but in view space.
That means that when my mouse is at position (20,20), this position is different from position (20,20) in my world space.
To fix this I need to transform my pointA vector with the inverse of the translation matrix. This converts the vector into a vector inside world space.
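A minimal MonoGame sketch of both directions; the (400, 240) offset, the spriteBatch, and the drawing calls are assumptions for illustration:

// World -> view: draw everything offset so the world origin lands mid-screen.
Matrix viewMatrix = Matrix.CreateTranslation(400f, 240f, 0f);
spriteBatch.Begin(transformMatrix: viewMatrix);
// ... draw your vectors/lines at their world-space coordinates ...
spriteBatch.End();

// View -> world: the mouse lives in view (screen) space, so bring it back
// into world space with the inverse of the same matrix.
Matrix inverseView = Matrix.Invert(viewMatrix);
MouseState mouse = Mouse.GetState();
Vector2 pointA = Vector2.Transform(new Vector2(mouse.X, mouse.Y), inverseView);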
So that's about it.
It took me two days to figure this out.
Here is a fixed version of the second example.
Left: my world space
Right: my view space
Notice how my mouse is now properly aligned in my view space instead of in my world space
Here are some resources I collected along the way:
Article - World, View and Projection Transformation Matrices
The True Power of the Matrix (Transformations in Graphics) - Computerphile
RB Whitaker - Basic Matrices
Making a Game Engine: Transformations

Rotating about a Point in a Voxel Game Engine (C#)

I am continuing to build upon a voxel-based game engine made in OpenTK (a .NET/Mono binding of OpenGL). In this engine, there is a basic class called Volume which possesses traits such as position, rotation and scale, as well as rules to edit these values for animation.
How would I go about providing a function to rotate one point about another point?
I could quite easily rotate an object about its center (by changing its rotation property), but what if I need the object to rotate about the origin, or about a random point in space? This would be useful for grouping blocks together, as I could then rotate objects as if they were stuck together, rather than having them rotate individually.
I heard I would need to dive in at the deep end and learn about rotation matrices, but honestly it went over my head. The closest resource I have been able to find so far is this link; however, it details rotating around an axis. Could somebody adapt those instructions, or even better, give me basic pseudocode for a function that rotates an object given its position and a point of rotation?
EDIT:
The following solution doesn't seem to work. My code is as simple as:
void RotateAboutPoint(Vector3 point, Vector3 amount)
{
    v.Translate(point);
    v.Rotate(amount);
    v.Translate(-point);
}
Should this work, and if not, could anyone help further now that the situation is explained properly?
As far as I can tell, this may as well just be:
void RotateAboutPoint(Vector3 point, Vector3 amount)
{
    v.Rotate(amount);
}
Which defeats the object of performing this around a point.
These co-ordinates are not in relation to the object... Sorry if my poor explanation made this unclear before!
I answered a similar question here: Rotating around a point different from origin
In the link you provided, the author lays out the steps of rotation:
(1) Translate space so that the rotation axis passes through the origin.
(2) Rotate space about the z axis so that the rotation axis lies in the xz plane.
(3) Rotate space about the y axis so that the rotation axis lies along the z axis.
(4) Perform the desired rotation by θ about the z axis.
(5) Apply the inverse of step (3).
(6) Apply the inverse of step (2).
(7) Apply the inverse of step (1).
Actually, steps (2), (3), (5), and (6) are unnecessary if you only need to rotate about a point; those steps apply when you need to rotate your object around an arbitrary line in space.
In your case: let's say you want to rotate your object around (a, b). In OpenTK-style GL calls (remember OpenGL applies these to your vertices in reverse order, bottom-up):
GL.PushMatrix();                // save the current matrix
GL.Translate(a, b, 0);          // 3. move the pivot back to (a, b)
GL.Rotate(angle, 0, 0, 1);      // 2. rotate about the z axis
GL.Translate(-a, -b, 0);        // 1. move the pivot to the origin
// ... draw your object here ...
GL.PopMatrix();                 // restore the saved matrix
EDIT:
Sorry, I forgot to add the encapsulation of your rotation process. (It was in the post I linked, though.)
Further info:
What is this encapsulation, and why do we need it? The answer is simple. OpenGL stores a 4x4 matrix which is initially the identity matrix. When you perform a translate or rotate operation, OpenGL updates that matrix, and at the final state OpenGL multiplies each vertex by it. (If you do not perform any operation, multiplying the vertices by the identity matrix gives you back the same vertex coordinates.)
The problem in your code is that when you don't encapsulate your rotate/translate block, the final matrix will be the same for all the objects in your scene. With the encapsulation (GL.PushMatrix/GL.PopMatrix) we guarantee that the updated matrix is used only inside that block.
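If you ever move away from the fixed-function stack, the same pivot rotation can be composed as a single OpenTK Matrix4. A sketch, noting that OpenTK (like XNA) uses the row-vector convention, so the transforms below apply left to right:

// Rotate point v around pivot p by 'angle' radians about the z axis.
Matrix4 rotateAboutPivot =
    Matrix4.CreateTranslation(-p)       // 1. move the pivot to the origin
    * Matrix4.CreateRotationZ(angle)    // 2. rotate about z
    * Matrix4.CreateTranslation(p);     // 3. move back
Vector3 rotated = Vector3.TransformPosition(v, rotateAboutPivot);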

multiple matrices in c# xna phone app

I've created the beginnings of a windows phone app. It's a mix of two popular online tutorials
http://rbwhitaker.wikidot.com/simple-3d-animation
http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series1/Terrain_from_file.php
The code I've made is here
http://pastebin.com/5VusJpB0
I've added some code to read the accelerometer, but it's all going a bit wrong! The code I've copied from the two examples declares two sets of world, view, and projection matrices: one set for the aircraft model in rbwhitaker's code and the other set for the terrain from Riemer's code. I believe the matrices are the problem, but I don't quite understand how they work. I only need one camera view, so I should drop one view matrix, and it only needs one projection declaration, so I should drop one projection matrix too, right? I'm guessing they should both share the same world but have different positions in that world. Can somebody help a noob out and spot the problem?
Thank you.
You are on the right track to solving this. Both the terrain and the model (and any other drawn item) should share the same view and projection matrices. Each item, however, should have its own world matrix.
The world matrix represents the individual item's position and orientation relative to world space.
Think of the view matrix as the camera's position and orientation in the world. In actuality it is the inverse of that, but it can be thought of that way for the mind's-eye conceptualization.
The projection matrix is somewhat analogous to a lens attached to the camera (the view matrix), as it modifies the way the world is seen from the camera's perspective.
Just as when viewing a movie you are looking at many actors or props at any given moment (each with its own position and orientation in the scene, its world matrix), you view them through a single camera at a time (the shared view matrix), which is fitted with a single lens at a time (the projection matrix).
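A minimal XNA sketch of that arrangement; the camera placement and the DrawTerrain/DrawModel helpers are illustrative assumptions:

// One camera: a single view and projection matrix shared by everything.
Matrix view = Matrix.CreateLookAt(
    new Vector3(0, 30, 60),   // camera position (arbitrary)
    Vector3.Zero,             // look-at target
    Vector3.Up);
Matrix projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,
    GraphicsDevice.Viewport.AspectRatio,
    0.1f, 1000f);

// Each drawn item keeps its own world matrix (position/orientation/scale):
Matrix terrainWorld = Matrix.Identity;
Matrix aircraftWorld = Matrix.CreateScale(0.5f)
                     * Matrix.CreateRotationY(yaw)
                     * Matrix.CreateTranslation(aircraftPosition);

// Hypothetical draw helpers: same view/projection, different world.
DrawTerrain(terrainWorld, view, projection);
DrawModel(aircraftModel, aircraftWorld, view, projection);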

determine correct place using homography matrix (AR)

I use SURF in the Emgu CV library to detect and recognize my object. I need to insert a 3D model in the place of this object, and I have the homography matrix. What I want to know is how to get the SharpGL modelview matrix from this homography matrix. I want the steps that will give me the correct modelview matrix, so I can place the 3D object.
Any answer will help me.
Thanks in advance.
Take a look at AForge.NET. The author of that library did something very similar using glyphs, inserting his own 3D model in place of the glyph. The library handles the 3D pose of the glyph and applies it to the 3D model. The project can be found here:
http://www.aforgenet.com/projects/gratf/
I don't know how you would do the same thing with OpenCV and Emgu.
You should simply calibrate your camera using Zhang's method to get the camera matrices, and then use the homography decomposition as described in the link you found.
To sum up:
Perform classical checkerboard corner detection (Emgu CV code here)
Increase the corner detection accuracy to subpixel level by invoking the FindCornerSubPix() function
Finally, use CameraCalibration.CalibrateCamera() to calculate the intrinsic camera parameters
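To sketch the shape of that pipeline (Emgu CV 2.x-style API; the exact signatures vary between Emgu versions, so treat this as an outline rather than copy-paste code):

// 1. Classical checkerboard corner detection on a grayscale frame:
Image<Gray, Byte> gray = frame.Convert<Gray, Byte>();
Size patternSize = new Size(9, 6);    // inner corners of the printed checkerboard
PointF[] corners = CameraCalibration.FindChessboardCorners(
    gray, patternSize, CALIB_CB_TYPE.ADAPTIVE_THRESH);

// 2. Refine the detected corners to subpixel accuracy:
gray.FindCornerSubPix(new PointF[][] { corners },
    new Size(11, 11), new Size(-1, -1), new MCvTermCriteria(30, 0.01));

// 3. Calibrate: objectPoints holds the known 3D checkerboard coordinates and
//    imagePoints the detected corners, one array per captured view.
IntrinsicCameraParameters intrinsics = new IntrinsicCameraParameters();
ExtrinsicCameraParameters[] extrinsics;
CameraCalibration.CalibrateCamera(objectPoints, imagePoints, gray.Size,
    intrinsics, CALIB_TYPE.DEFAULT, out extrinsics);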
Hope this helps

How do you transform a Skinned Mesh during mesh loading and processing?

I created my own skinned mesh loader. It's working fine, but my problem is that I don't know how to transform (scale and rotate) the skinned mesh so that the transformations are "baked" onto the vertices. If it were just geometry, transforming the vertices would be a piece of cake, but now that skinning info is involved, if I apply a scale, for example, my mesh gets all stretched. I know I need to transform my skinning data too, but which parts? All the bind pose matrices? The inverse bind pose matrices? I can't seem to understand how to go about this.
My implementation is in C# and OpenTK, and I am specifically loading skinned Collada files exported from Blender 2.6.
Thanks in advance.
I don't know C# and OpenTK, but I'll try to help on the theoretical side. Vertices are transformed by a weighted global transform matrix. To form a global transform, you concatenate the local transforms of the joints down the hierarchy. To create a local transform, you concatenate the local translation, rotation, and scale. The weights come from the skinning data on each joint. So I think you need to get the joint-local rotation/translation/scaling of your bind pose, manipulate those local matrices, and combine them into global matrices. After that, you apply the weights to the global transformations and then transform the vertices.
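One common way to bake a whole-mesh transform M is to transform the vertices and fold M's inverse into the inverse bind matrices so the skinning still cancels correctly. A sketch in OpenTK (row-vector convention, so v * M applies M to v); the variable names are assumptions about your loader, not your actual code:

// Bake M (say, a scale plus a rotation) into a skinned mesh.
Matrix4 M = Matrix4.CreateScale(2.0f) * Matrix4.CreateRotationY(angle);
Matrix4 Minv = Matrix4.Invert(M);

// 1. Bake into the vertex positions: v' = v * M.
for (int i = 0; i < vertices.Length; i++)
    vertices[i] = Vector3.TransformPosition(vertices[i], M);

// 2. Absorb M's inverse into each inverse bind matrix, so that
//    (v * M) * (Minv * invBind) == v * invBind, as before the bake.
for (int j = 0; j < inverseBindMatrices.Length; j++)
    inverseBindMatrices[j] = Minv * inverseBindMatrices[j];

// 3. Apply M after the root joint's transform as well, so animated poses
//    also end up in the new space.
rootLocalTransform = rootLocalTransform * M;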
The following link may be similar to your question.
COLLADA: Inverse bind pose in the wrong space?
I created this Collada file player, but it uses C++:
http://www.youtube.com/watch?v=bXBfVl-msYw
