Kinect wrist/hand joints position changing depending on palm size - c#

I'm creating a Kinect mouse application. The idea is to use the hand/wrist Kinect joint as the source for the cursor position, and finger detection to perform clicks, holds, etc.
I got finger detection and palm gesture recognition working, and here I found my blocker:
the position of the wrist/hand joint changes when I make a palm gesture, for instance when I change from an open palm to a fist.
Is there any workaround for this issue?
I'm using Kinect SDK 1.5 and EmguCV in this WPF application.

Thanks Jerdak for your suggestion.
I modified it a bit and the result is pretty nice in my opinion.
I'm calculating the vector between the elbow and wrist positions, normalizing it, and then multiplying it by a fixed arm length.
Then I just add that vector to the elbow position.
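In case it helps, a minimal sketch of that calculation with Kinect SDK 1.5 types (fixedArmLength is a constant in meters that I tune per user):
using System;
using Microsoft.Kinect;

static class HandStabilizer
{
    public static SkeletonPoint StabilizedHand(SkeletonPoint elbow, SkeletonPoint wrist, float fixedArmLength)
    {
        // Vector from the elbow to the wrist.
        float dx = wrist.X - elbow.X;
        float dy = wrist.Y - elbow.Y;
        float dz = wrist.Z - elbow.Z;

        // Normalize it.
        float length = (float)Math.Sqrt(dx * dx + dy * dy + dz * dz);
        dx /= length;
        dy /= length;
        dz /= length;

        // Scale by the fixed arm length and add it back onto the elbow position.
        return new SkeletonPoint
        {
            X = elbow.X + dx * fixedArmLength,
            Y = elbow.Y + dy * fixedArmLength,
            Z = elbow.Z + dz * fixedArmLength
        };
    }
}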
Edit:
After a bit more testing, this approach works almost perfectly; the only trouble is that the elbow joint can "bounce" too...

Pinch and rotate around a point using MRTK and Hololens 1

I'm trying to rotate a beam/cuboid around a pivot using MRTK, Unity, and the Hololens 1 when you're doing the pinch and hold gesture. The beam should remain in place once you've let go of the pinch.
My initial thoughts were to get the Cartesian coordinates of the pinch and, based on their position relative to the pivot, have the beam rotate by however many degrees are needed. E.g. the hand position while pinching is (1,1,0) and the pivot position is (0,0,0); thus, the beam should be rotated 45 deg in the XY plane (ignoring the z components). I'm not sure how to go about doing this, as the documentation seems to indicate that getting the coordinates of the hand/pinch works only on the Hololens 2. (https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/Input/HandTracking.html#hand-tracking-events & https://microsoft.github.io/MixedRealityToolkit-Unity/api/Microsoft.MixedReality.Toolkit.Input.IMixedRealityHand.html#Microsoft_MixedReality_Toolkit_Input_IMixedRealityHand_TryGetJoint_).
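For the rotation part, this is roughly what I have in mind (a minimal sketch only; beam and pivot are placeholders, and I still don't know how to obtain handPosition):
using UnityEngine;

public class PinchRotateSketch : MonoBehaviour
{
    public Transform beam;   // the cuboid to rotate
    public Transform pivot;  // the point to rotate around

    // Rotate the beam about the Z axis based on a hand/pinch position,
    // ignoring the z component as described above.
    public void RotateToHand(Vector3 handPosition)
    {
        Vector3 offset = handPosition - pivot.position;
        float angleDeg = Mathf.Atan2(offset.y, offset.x) * Mathf.Rad2Deg; // (1,1,0) -> 45 deg
        beam.rotation = Quaternion.AngleAxis(angleDeg, Vector3.forward);
    }
}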
Does anyone know how to go about doing this, or can anyone at least point me in the right direction? (Tutorials/code/assets would be much appreciated!)
Thank you!
I'm not sure how to go about doing this as the documentation seems to indicate that the only way to get the coordinates of the hand/pinch only works for the Hololens 2
Yes, HoloLens 1 does not support hand tracking, such as touching holograms directly with your hands or pointing and committing with hands. It is recommended that you use the Gaze and commit interaction model instead, so that you can easily get the position of the GGVPointer.
A pinch-to-rotate interaction can be achieved by adding the ManipulationHandler component from MRTK to your cube. The component can be configured to allow two-handed manipulation.
I'm not sure how to go about doing this as the documentation seems to indicate that the only way to get the coordinates of the hand/pinch only works for the Hololens 2.
There are a few ways to query the pointer position. The code below should return the right GGVPointer position on the HoloLens.
Vector3 pos;
GGVPointer pointer = PointerUtils.GetPointer<GGVPointer>(Handedness.Right);
if (pointer != null)
{
    pos = pointer.Position;
}
In case you just want to rotate the object around its center, you can use the BoundingBox component. It creates handles that can be pinched and moved to rotate an object, and you can disable the axes you don't want. It works even on the HoloLens 1.

How to do full body detection and tracking in a rectangular frame using c# and kinect

I am working on a project using C# and Kinect in which I want to detect the entire body and then track its motion in the video using a rectangular frame (just like mobile cameras that frame our faces in a rectangle and track our motion). Can anyone help me with a coding hint?
For a quick start, just use these quick-start series; they contain everything you need to get going. You don't need separate detection for the rectangle: once the Kinect tracks a skeleton, you can build the rectangle by using the left-most and right-most joint X positions for its two vertical edges and the top-most and bottom-most joint Y positions for its horizontal edges.
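A minimal sketch of that bounding rectangle, assuming Kinect SDK 1.x skeleton data and at least one tracked joint (the box is in skeleton space; you would still map it to color/depth coordinates before drawing):
using System.Linq;
using Microsoft.Kinect;

static class SkeletonBounds
{
    // Returns the extremes of all tracked joints so you can build the rectangle.
    public static void GetBounds(Skeleton skeleton,
        out float left, out float right, out float top, out float bottom)
    {
        var tracked = skeleton.Joints.Cast<Joint>()
            .Where(j => j.TrackingState == JointTrackingState.Tracked)
            .ToList();

        left = tracked.Min(j => j.Position.X);    // left-most joint
        right = tracked.Max(j => j.Position.X);   // right-most joint
        top = tracked.Max(j => j.Position.Y);     // top-most joint
        bottom = tracked.Min(j => j.Position.Y);  // bottom-most joint
    }
}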

Recognize if an arm is swinging/moving towards Kinect sensor or away from it

I am trying to figure out how to recognize if a person's arm is swinging/moving towards or away from the Kinect. I'm thinking it is much like a hit or punch towards the sensor.
The depth changes as the arm is going toward or away from the sensor, but how can this gesture be recognized?
I'm using the Kinect for Windows (older version) and SDK 1.8. I have also looked at EMGU (C# wrapper for OpenCV).
Any help answering this question would be greatly appreciated.
You can check out and use Tracking Users with Kinect Skeletal Tracking and the Channel 9 tutorials.
1. Start with a base position of the user.
2. Save the positions of the arm joints (e.g. shoulder left, elbow left, wrist left and hand left).
3. The saved positions from step 2 are your reference points. Use these to calculate a swing move (e.g. handLeftNew.Z < handLeftReference.Z means movement towards the Kinect); a sketch of this comparison follows the code sample below.
Code sample
// get the joint
Joint leftHand = skeleton.Joints[JointType.HandLeft];
// get the individual points of the left hand
double leftX = leftHand.Position.X;
double leftY = leftHand.Position.Y;
double leftZ = leftHand.Position.Z;
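To extend that into step 3, a minimal sketch of the comparison (referenceZ is the value you saved earlier, and Threshold is a hypothetical dead zone in meters you would tune to ignore jitter):
// compare the current hand depth with the saved reference depth
const float Threshold = 0.15f; // hypothetical dead zone in meters

Joint leftHandNow = skeleton.Joints[JointType.HandLeft];
float deltaZ = (float)(leftHandNow.Position.Z - referenceZ);

if (deltaZ < -Threshold)
{
    // hand moved towards the sensor: candidate swing/punch towards the Kinect
}
else if (deltaZ > Threshold)
{
    // hand moved away from the sensor
}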

Casting a ray from mouse through distortion matrix

I've searched this board, as well as the Oculus and Unity boards, and couldn't really find anything that helped.
I'm working on a vehicle simulation. Before we started using the oculus, it was just a regular first person perspective. You used a racing wheel/pedals to drive and the mouse to control all the buttons and switches etc. We use raycasting from the mouse point on the screen into the world to interact with the various controls in the vehicle.
Now that we're using the oculus, the raycast isn't taking into account the distortion matrix used on the oculus cameras. So you're not actually casting a ray at what you're visually clicking on. Using Debug.DrawRay I found that it was slightly off. Just to be sure, I disabled the lens correction via inspector on the OVRCameraController and sure enough the raycasting was working again.
The ray itself is calculated the usual way one does when firing from the mouse point:
ScreenPointToRay(Input.mousePosition);
Would anyone have any idea how I can adjust my ray so it works with lens correction on?
Cheers,
Gordon
Simply multiply the distortion matrix with the ray's vectors (position and direction) and you have your new ray. I would suggest using homogeneous coordinates with a 4x4 matrix and Vec4s, where positions have component w = 1.0 and normals/directions have w = 0.0. This way you can simply multiply and you are done; depending on the lookup direction you might have to use the inverse matrix :)
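In Unity that could look roughly like this (distortionMatrix is a hypothetical Matrix4x4 you would build from the lens-correction data; you may need its inverse depending on which way you are mapping):
using UnityEngine;

static class RayDistortion
{
    public static Ray TransformRay(Ray ray, Matrix4x4 distortionMatrix)
    {
        // Positions are multiplied with w = 1, directions with w = 0.
        Vector3 origin = distortionMatrix.MultiplyPoint(ray.origin);
        Vector3 direction = distortionMatrix.MultiplyVector(ray.direction).normalized;
        return new Ray(origin, direction);
    }
}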
Alright, what I ended up doing was creating a 3D cursor, bypassing the distortion matrix entirely.
I placed a gameobject at the same place as the "head" (between left eye and right eye cameras). It has a script on it that rotates up/down/left/right based on mouse movement. I then temporarily put a spot light with a narrow cone and high intensity on it so it looked like a laser pointer. I figured if the light is hitting things, so should a raycast of the same origin. Which ended up working.
However, this didn't really solve the issue of using a cursor. I tried a number of things that ultimately didn't work (they didn't line up with where the light/raycast hit).
Finally I realized I was overlooking something very simple. I lowered the near clipping plane of the cameras and placed a plane as close as I could to the camera while still being visible. I then rotated it on local y by 180 so it would be invisible to the cameras and not block ray casts.
I then added some code so that when a raycast hit something, it would fire a second raycast from the hit point back to the origin. On the way it would have to hit the plane, which was essentially at the near clipping plane. I would then move my 3D cursor to that hit point.
Now it works as intended. Where the cursor is, is where the original raycast hit. The cursor now matched the position of the laser dot. So then I removed the light component. Done.
Hope this helps someone else someday.
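A minimal sketch of the reverse-raycast step described above (cursorTransform and nearPlaneMask are placeholders; the plane lives on its own layer just past the camera's near clip plane):
using UnityEngine;

public class CursorOnNearPlane : MonoBehaviour
{
    public Transform cursorTransform; // the 3D cursor object
    public LayerMask nearPlaneMask;   // layer containing only the near-clip plane

    public void UpdateCursor(Ray pointerRay)
    {
        RaycastHit hit;
        if (Physics.Raycast(pointerRay, out hit))
        {
            // Fire a second ray from the hit point back towards the ray origin.
            Ray backRay = new Ray(hit.point, (pointerRay.origin - hit.point).normalized);

            RaycastHit planeHit;
            if (Physics.Raycast(backRay, out planeHit, Mathf.Infinity, nearPlaneMask))
            {
                // Place the cursor where the return ray crosses the near plane.
                cursorTransform.position = planeHit.point;
            }
        }
    }
}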

matching Kinect's Skeleton data to a .fbx model in XNA

My question is about a school project that I'm working on.
It involves mapping 3D models of clothing (like a pair of jeans) on a skeleton
that is generated by my Kinect camera.
An example can be found here: http://www.youtube.com/watch?v=p70eDrvy-uM.
I have searched on this subject and found some related threads like:
http://forums.create.msdn.com/forums/t/93396.aspx - this question demonstrates a way using brekel for motion capturing. However, I have to present it in XNA.
I believe that the answer lies in the skeleton data of the 3D model (properly exported as a .FBX file). Is there a way to align or match that skeleton with a skeleton that the Kinect camera generates?
Thanks in advance!
Edit: I am making some progress. I have been playing around with some different models, trying to move some bones upward (a very simple use of CreateTranslation with a float calculated from the elapsed game time), and it works if I choose the root bone, but it doesn't work on some bones (like a hand or an arm, for example). If I track all the Transform properties of that bone, including the X, Y, and Z properties, I can see that something is moving. However, the chosen bone stays in its place. Anyone have any thoughts?
If you are interested, you'll find a demo here. It has source code for real-time motion capture using the Kinect and XNA.
I have been working on this off and on for a while now. A simple solution I'm using right now is to use the points the NUI skeleton tracks to match the rotations of the corresponding joints in a .fbx model. The .fbx model will most likely have many more joints than are tracked, and for those you can just iterate a rotation.
The fun part:
The Kinect tracks skeleton joint positions in skeleton space (-1 to 1), while your model needs rotations in model space. Both provide a child's position or rotation relative to its parent bone in the hierarchy. Also, the rotations you need for a .fbx model are around an arbitrary axis.
What you will need is the change from the .fbx model in its bind pose to the pose represented by the Kinect data. To get this, you can do some vector math to find the rotation of a parent joint around an arbitrary axis, rotate it accordingly, and then move on down the skeleton chain.
Say you have a shoulder, which we will call point a, and an elbow, which we will call point b, on the bind pose of the .fbx model. On the Kinect skeleton we will have the corresponding points a' and b'.
So for the bind model we will have a vector V from the shoulder to the elbow.
V = b - a
The skeleton model will also have a vector V'
V' = b' - a'
So the axis of rotation for the shoulder will be
Vhat = V cross-product V' (normalize Vhat before using it as a rotation axis)
The angle of rotation for the shoulder around Vhat
Theta = acos((V dot-product V') / (magnitude(V) * magnitude(V')))
You will then want to rotate the shoulder joint on the .fbx model by Theta around Vhat.
The model may seem to flop a bit so you may have to use some constraints or use quaternions or other things that help it not look like a dead fish.
Also, I'm not sure if XNA has a built-in function to rotate around an arbitrary axis. And if someone could double-check my math I'd appreciate it; I'm a little rusty.
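For what it's worth, a minimal sketch of the math above using XNA types; XNA's Matrix.CreateFromAxisAngle does handle rotation around an arbitrary axis (the joint positions are assumed to be expressed in the same space and relative to the same parent):
using System;
using Microsoft.Xna.Framework;

static class KinectBoneMapper
{
    // a, b: shoulder and elbow from the bind pose; aPrime, bPrime: the same joints from the Kinect skeleton.
    public static Matrix BoneRotation(Vector3 a, Vector3 b, Vector3 aPrime, Vector3 bPrime)
    {
        Vector3 v = b - a;                 // bind-pose bone vector V
        Vector3 vPrime = bPrime - aPrime;  // Kinect bone vector V'

        Vector3 axis = Vector3.Cross(v, vPrime); // Vhat, the rotation axis
        axis.Normalize();

        float cosTheta = Vector3.Dot(v, vPrime) / (v.Length() * vPrime.Length());
        float theta = (float)Math.Acos(MathHelper.Clamp(cosTheta, -1f, 1f));

        return Matrix.CreateFromAxisAngle(axis, theta);
    }
}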
The Kinect SDK delivers only points for body parts, like the head position or the right hand position. Separately, you can also read the depth stream from the Kinect sensor.
But currently the Kinect doesn't generate a 3D model of the body. You would have to do that yourself.
I eventually settled on the Digital Runes option (with the Kinect demo), which was almost perfect apart from a few problems that I wasn't able to solve.
But because I had to hurry for school, we decided to turn the project around completely and our new solution did not involve the Kinect. The Digital Runes examples did help a lot though.
