I'm making a project for my university. I'm working with the Kinect and I want to detect fingertips using the Microsoft SDK. I have done depth segmentation based on the closest object and the player, so I now have only the hand part in my image. But I have no idea what to do next: I'm trying to get the hand's boundary but can't find a method to do it.
Can anyone suggest some useful ways or ideas to do that?
I am using C# and WPF.
It would be great if anyone could provide sample code. I only want to detect the index finger and the thumb.
Thanks in advance
By the way, about finding the boundaries, think about this:
The Kinect gives you the position of your hand, which sits at some Z position: its distance from the Kinect device. Every pixel of your hand (in the depth stream) is at almost the same Z position as the hand joint. A boundary pixel is one whose neighbour lies much farther back than your hand. This way you can find the boundary in every direction around the hand; you just need to analyze the pixel data.
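To make that concrete, here is a minimal C# sketch of the idea. It assumes you already have the raw depth values in millimetres and the hand joint's depth from skeletal tracking; the array and variable names are hypothetical, not Kinect SDK types:

    using System;
    using System.Collections.Generic;
    using System.Drawing;

    static class HandBoundary
    {
        // Mark every pixel whose depth is within a tolerance of the
        // hand joint's depth as belonging to the hand.
        public static bool[] HandMask(short[] depthMm, int handDepthMm, int toleranceMm)
        {
            var mask = new bool[depthMm.Length];
            for (int i = 0; i < depthMm.Length; i++)
                mask[i] = Math.Abs(depthMm[i] - handDepthMm) <= toleranceMm;
            return mask;
        }

        // A pixel is on the boundary if it belongs to the hand but has
        // at least one 4-connected neighbour that does not.
        public static List<Point> BoundaryPixels(bool[] mask, int width, int height)
        {
            var boundary = new List<Point>();
            for (int y = 1; y < height - 1; y++)
                for (int x = 1; x < width - 1; x++)
                {
                    int i = y * width + x;
                    if (mask[i] && (!mask[i - 1] || !mask[i + 1] ||
                                    !mask[i - width] || !mask[i + width]))
                        boundary.Add(new Point(x, y));
                }
            return boundary;
        }
    }

Once you have the boundary pixels, fingertips such as the index finger and thumb are typically found as the points of highest curvature along that contour.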
Hope it helped
Try this download; it is Kinect finger detection. Get the "Recommended" download on the page; I have found that the .dlls in it work well. Good luck!
I have been experimenting with IR-based depth sensors, but they have sadly been lacking in accuracy (at least for what I want) and range.
So I'm looking for alternatives.
The application: I am holding a rectangular block of plastic and sweeping it from left to right across a surface (a wall in this case), and I want to know if the gap between the plastic surface and the wall surface exceeds a certain threshold.
I have a web camera attached to a Microsoft Surface, and I am aiming this camera at the user's motion.
If I had a better-quality image, I am sure I could use basic geometry to work this out; a rough sketch of that geometry is below. I am looking around for better cameras as I type...
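For reference, the basic geometry I have in mind is just a scale conversion: the block's known physical width gives a millimetres-per-pixel scale at that distance, which turns a measured pixel gap into millimetres. A tiny sketch, where every value is a hypothetical measurement from my own image-processing step:

    // All values are hypothetical inputs from the image-processing step.
    double blockWidthMm = 50.0;   // known physical width of the plastic block
    double blockWidthPx = 180.0;  // measured width of the block in the image
    double gapPx        = 12.0;   // measured gap between block edge and wall

    double mmPerPixel = blockWidthMm / blockWidthPx;
    double gapMm      = gapPx * mmPerPixel;

    bool exceedsThreshold = gapMm > 3.0;  // e.g. a 3 mm threshold

The catch is that a low-quality image makes the two pixel measurements unreliable in the first place.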
I was considering a radar sensor instead.
I spent many hours yesterday looking for something like a USB radar sensor with an SDK that is friendly to C#.
I have not found anything.
That is not to say I have given up. I am continuing to look.
I just thought I would post here as well in case anyone has ideas on this.
I will continue to update my question, hopefully with a solution, if no one else does.
Thanks.
I am a newbie to programming. I am using Emgu CV for my project, and now I want to detect the movement of an edge; I may want to use the resulting motion vector for further operations. However, I can only find documentation on tracking the movement of feature points. How can I do it on an edge?
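From what I have read, one common approach is to detect the edge with Canny, sample points along it, and track those points with pyramidal Lucas-Kanade optical flow. A minimal sketch of that idea, assuming the Emgu CV 3.x CvInvoke API (the 2.x wrapper spells these calls differently):

    using System.Collections.Generic;
    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Structure;

    static class EdgeTracker
    {
        // Sample points along a Canny edge in 'prev', then track them
        // into 'next' with pyramidal Lucas-Kanade optical flow.
        public static PointF[] TrackEdge(Image<Gray, byte> prev,
                                         Image<Gray, byte> next,
                                         out PointF[] moved, out byte[] status)
        {
            var edges = new Image<Gray, byte>(prev.Size);
            CvInvoke.Canny(prev, edges, 80, 160);

            // Take every 4th edge pixel as a trackable point.
            var pts = new List<PointF>();
            for (int y = 0; y < edges.Height; y += 4)
                for (int x = 0; x < edges.Width; x += 4)
                    if (edges.Data[y, x, 0] > 0)
                        pts.Add(new PointF(x, y));
            PointF[] prevPts = pts.ToArray();

            float[] err;
            CvInvoke.CalcOpticalFlowPyrLK(
                prev, next, prevPts,
                new Size(15, 15), 3,            // search window, pyramid levels
                new MCvTermCriteria(20, 0.03),
                out moved, out status, out err);
            return prevPts;
        }
    }

The displacement from prevPts[i] to moved[i] (for points with status[i] == 1) gives a motion vector for that part of the edge; averaging them gives the edge's overall motion.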
Thanks a lot.
I have a project idea to check web usability using eye tracking. For that, I need to predict the focus point on the screen (i.e., the pixel coordinates) at a specific time interval (0.5 seconds).
Here is some additional information:
I intend to use OpenCV or Emgu CV, but it is causing me a bit of trouble because of my inexperience with OpenCV.
I am planning to "flatten" the eye so it appears to move on a plane. The obvious choice is to calibrate the camera to try to remove the radial distortion.
During the calibration process, the user looks at the corners of a grid on a screen. The moments of the pupil are stored in a Mat for each position during the calibration, so I end up with an image of dots corresponding to the eye positions when looking at the corners of the grid on the screen.
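The crudest "flattening" I can think of is to interpolate the current pupil position between the pupil positions recorded at the screen corners. A minimal sketch of that idea (a real system would presumably fit a second-order polynomial over all the grid points instead; every name here is hypothetical):

    using System.Drawing;

    static class GazeMap
    {
        // Linearly interpolate the pupil position between the pupil
        // positions captured while the user looked at three screen corners.
        public static PointF PupilToScreen(PointF pupil,
                                           PointF atTopLeft, PointF atTopRight,
                                           PointF atBottomLeft,
                                           float screenW, float screenH)
        {
            float u = (pupil.X - atTopLeft.X) / (atTopRight.X - atTopLeft.X);   // 0..1 across
            float v = (pupil.Y - atTopLeft.Y) / (atBottomLeft.Y - atTopLeft.Y); // 0..1 down
            return new PointF(u * screenW, v * screenH);
        }
    }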
Is there any article or example I can refer to to get a good idea about this scenario and OpenCV gaze prediction?
Thanks!
Different methods of camera calibration are possible, including ones similar to your corner-dots method.
There is thesis work on eye gaze using C++ and OpenCV that should definitely help you; you can find some OpenCV-based C++ scripts there as well.
FYI:
Some works claim to do eye-gaze tracking without calibration:
Calibration-Free Gaze Tracking: An Experimental Analysis, by Maximilian Möllers
Free head motion eye gaze tracking without calibration
[I am restricted to post less than 2 reference links]
To get precise eye locations, you first need to calibrate your camera, using the chessboard approach or some other tool. Then you need to undistort the image if it is not straight.
OpenCV already comes with an eye detector (a Haar cascade classifier; see the haarcascade_eye.xml file), so you can locate and track the eyes easily.
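For example, here is a minimal sketch using Emgu CV's CascadeClassifier wrapper (3.x API); the cascade path and input image are placeholders:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Structure;

    // Locate eyes in a single frame with the stock Haar cascade.
    var eyeCascade = new CascadeClassifier("haarcascade_eye.xml");
    using (var frame = new Image<Bgr, byte>("face.jpg"))   // placeholder input
    using (var gray = frame.Convert<Gray, byte>())
    {
        Rectangle[] eyes = eyeCascade.DetectMultiScale(
            gray,
            1.1,                // scale factor between pyramid levels
            4,                  // min neighbours: higher = fewer false positives
            new Size(20, 20));  // minimum eye size in pixels
        foreach (Rectangle eye in eyes)
            frame.Draw(eye, new Bgr(0, 0, 255), 2);   // red box around each eye
    }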
Besides that, it is only math that maps the detected eye position to the location on screen it is looking at.
I am implementing an eye tracker using Emgu CV (the OpenCV C# wrapper). So far I have been able to detect the iris center and eye corner accurately.
As the next step, I want to get the screen coordinate the user is focusing on (also known as the gaze point). As a beginner in image processing, I am completely unfamiliar with gaze mapping and gaze estimation.
I would be thankful if you could provide any code snippets or algorithms to perform gaze mapping and retrieve the gaze coordinate on screen.
Thanks in advance
If you want to research the field in more depth, there's this interesting piece of research called "EyeTab: Model-based gaze estimation on unmodified tablet computers". It might have some of the information you want, or at least help you understand the field more.
http://www.cl.cam.ac.uk/research/rainbow/projects/eyetab/
(also on GitHub)
I am making an object-tracking application. I have used Emgu CV 2.1.0.0 to load a video file into a PictureBox, and I have also taken the video stream from a web camera.
Now I want to draw an unfilled square on the video stream using the mouse and then track the object enclosed by that square as the video continues to stream.
This is what people have suggested so far:
(1) .NET video overlay drawing (DirectX) - but this is for C++ users; the suggester said that there are .NET wrappers, but I had a hard time finding any.
(2) The DxLogo sample - a sample application showing how to superimpose a logo on a data stream. It uses a capture device for the video source and outputs the result to a file. Sadly, this does not use a mouse.
(3) GDI+ and mouse handling - I do not have a clue about this area; a minimal sketch of this approach follows below.
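For (3), here is a minimal WinForms sketch of rubber-banding a rectangle over a PictureBox with the mouse. 'videoBox' stands for the PictureBox an Emgu CV capture loop repaints, and the rest is hypothetical glue:

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    static class SelectionOverlay
    {
        // Call once, e.g. from the form's constructor. 'onSelected' receives
        // the final rectangle when the mouse button is released.
        public static void Wire(PictureBox videoBox, Action<Rectangle> onSelected)
        {
            Point dragStart = Point.Empty;
            Rectangle selection = Rectangle.Empty;

            videoBox.MouseDown += (s, e) => dragStart = e.Location;

            videoBox.MouseMove += (s, e) =>
            {
                if (e.Button != MouseButtons.Left) return;
                selection = new Rectangle(
                    Math.Min(dragStart.X, e.X), Math.Min(dragStart.Y, e.Y),
                    Math.Abs(e.X - dragStart.X), Math.Abs(e.Y - dragStart.Y));
                videoBox.Invalidate();   // trigger a repaint
            };

            videoBox.MouseUp += (s, e) => onSelected(selection);

            // Draw the unfilled rectangle over whatever frame is shown.
            videoBox.Paint += (s, e) =>
            {
                if (selection.Width > 0 && selection.Height > 0)
                    e.Graphics.DrawRectangle(Pens.Red, selection);
            };
        }
    }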
As for tracking the object inside the square, I would appreciate it if someone could give me some research-paper links to read.
Any help as to using the mouse to draw on a video is greatly appreciated.
Thank you for taking the time to read this.
Many Thanks
It sounds like you want to do image detection and/or tracking.
The EmguCV ( http://www.emgu.com/wiki/index.php/Main_Page ) library provides a good foundation for this sort of thing in .Net.
e.g. http://www.emgu.com/wiki/index.php/Tutorial#Examples
It's a pretty meaty subject, with many years and several branches of research behind it, so I'm not sure anyone can give a definitive guide; but reading up on neural networks and related topics would give you a pretty good grounding in how EmguCV and related libraries manage it.
It should be noted that systems such as EmguCV are designed to recognise predefined items within a scene (such as a licence plate number) rather than an arbitrary feature within a scene.
For arbitrary tracking of a given feature, a search for research papers on edge detection and the like (in combination with a library such as EmguCV) is probably a good start.
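As a starting point before the research papers, the simplest thing that can work is tracking-by-detection with template matching: cut the user's rectangle out of the first frame as a template, then find its best match in each new frame. A sketch using the Emgu CV 2.x Image API the asker mentioned (later versions moved these calls onto CvInvoke):

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;

    static class TemplateTracker
    {
        // Returns the best-match location of 'template' inside 'frame'.
        public static Rectangle Track(Image<Bgr, byte> frame,
                                      Image<Bgr, byte> template)
        {
            using (Image<Gray, float> result =
                   frame.MatchTemplate(template, TM_TYPE.CV_TM_CCOEFF_NORMED))
            {
                double[] minVals, maxVals;
                Point[] minLocs, maxLocs;
                result.MinMax(out minVals, out maxVals, out minLocs, out maxLocs);

                // maxLocs[0] is the top-left corner of the best match.
                return new Rectangle(maxLocs[0], template.Size);
            }
        }
    }

Template matching drifts as soon as the object rotates or changes scale, which is where the suggested reading on more robust trackers comes in.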
(You also may want to sneak a peek at an existing application such as http://www.pfhoe.com/ to see if it fits your needs)