Point tracking with Emgu CV? - C#

How do I track points in a video in real time, like in the video below?
https://www.youtube.com/watch?v=jg6Nz6BfoSQ
I managed to use an optical flow method to get this output for my video, but I couldn't find a way to do point tracking with Emgu CV. Can someone suggest what I should do?
In the YouTube video he used C++ as the language. Does the choice of language affect the real-time response of the system?

You can get the floor coordinates (http://msdn.microsoft.com/en-us/library/hh973078.aspx); there is an API function for it. You can also get the raw depth stream data and determine the 3D coordinates of a point. Building a full 3D point cloud is a little expensive and is usually done on the GPU for real-time applications. The problem is that you have to track objects. For tracking you can look at OpenCV and combine it with the Kinect raw data.
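For the Emgu CV point-tracking part of the question, here is a minimal sketch of sparse tracking with Shi-Tomasi corners and pyramidal Lucas-Kanade optical flow. It assumes a recent Emgu CV (3.x/4.x) build in which CvInvoke exposes GoodFeaturesToTrack and a CalcOpticalFlowPyrLK overload working on PointF arrays; the video path and window name are placeholders.

```csharp
using System.Collections.Generic;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

class PointTrackingDemo
{
    static void Main()
    {
        // Placeholder source: a video file path or a camera index both work here.
        var capture = new VideoCapture("input.avi");

        var prevGray = new Mat();
        var gray = new Mat();
        PointF[] prevPts = null;

        Mat frame;
        while ((frame = capture.QueryFrame()) != null && !frame.IsEmpty)
        {
            CvInvoke.CvtColor(frame, gray, ColorConversion.Bgr2Gray);

            if (prevPts == null || prevPts.Length < 10)
            {
                // (Re)seed the tracker with Shi-Tomasi corners.
                using (var corners = new VectorOfPointF())
                {
                    CvInvoke.GoodFeaturesToTrack(gray, corners, 200, 0.01, 10);
                    prevPts = corners.ToArray();
                }
            }
            else
            {
                // Track last frame's points into the current frame (pyramidal Lucas-Kanade).
                CvInvoke.CalcOpticalFlowPyrLK(
                    prevGray, gray, prevPts,
                    new Size(21, 21), 3, new MCvTermCriteria(30, 0.01),
                    out PointF[] currPts, out byte[] status, out float[] err);

                // Keep only successfully tracked points and draw them.
                var kept = new List<PointF>();
                for (int i = 0; i < currPts.Length; i++)
                {
                    if (status[i] != 1) continue;
                    kept.Add(currPts[i]);
                    CvInvoke.Circle(frame, Point.Round(currPts[i]), 3, new MCvScalar(0, 255, 0), -1);
                }
                prevPts = kept.ToArray();
            }

            gray.CopyTo(prevGray);
            CvInvoke.Imshow("tracked points", frame);
            if (CvInvoke.WaitKey(1) == 27) break;   // Esc to quit
        }
    }
}
```

Whether this runs in real time depends far more on the frame size and the number of tracked points than on C# versus C++; the heavy lifting happens inside the native OpenCV code either way.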

Related

Structure from Motion using Emgu

I have been trying to get my mind around how to do structure from motion using Emgu (the C# OpenCV wrapper) and images extracted from a monocular video, but I find most of the papers on the subject too theoretical and written in a mathematical language I am not familiar with.
I have been able to track 2D feature points fairly accurately, although I still get some outliers, and I now need to find the camera location and angle given the paired arrays of 2D feature points (current and next frame). I understand I probably need to use some kind of RANSAC algorithm to get the best match from a set of these feature points.
How would one proceed to solve this using Emgu and C#?
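One way to approach that step, sketched under the assumption that your Emgu CV build (3.x/4.x) wraps OpenCV's findEssentialMat and recoverPose and that you have the camera matrix K from a prior calibration, is to fit the essential matrix with RANSAC and then decompose it into the relative rotation and translation:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

static class RelativePose
{
    // prevPts/currPts are the matched 2D feature points from two consecutive frames,
    // cameraMatrix is the 3x3 intrinsic matrix K from a prior calibration.
    public static void Estimate(PointF[] prevPts, PointF[] currPts,
                                Mat cameraMatrix, Mat rotation, Mat translation)
    {
        using (var p1 = new VectorOfPointF(prevPts))
        using (var p2 = new VectorOfPointF(currPts))
        using (var inlierMask = new Mat())
        {
            // RANSAC rejects outlier correspondences while fitting the essential matrix.
            using (Mat essential = CvInvoke.FindEssentialMat(
                       p1, p2, cameraMatrix, FmType.Ransac, 0.999, 1.0, inlierMask))
            {
                // Decompose E into R and t; with a monocular camera t is only known up to scale.
                CvInvoke.RecoverPose(essential, p1, p2, cameraMatrix,
                                     rotation, translation, inlierMask);
            }
        }
    }
}
```

With a monocular video the recovered translation has no absolute scale, which is the usual structure-from-motion caveat.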

Camera calibration and predicting the eye focus point on screen with OpenCV / EmguCV

I have a project idea that checks web usability using eye tracking. For that I need to predict the focus point on the screen (i.e. the pixel coordinates) at a fixed time interval (0.5 seconds).
Here is some additional information:
I intended to use OpenCV or EmguCV, but it is causing me a bit of trouble because of my inexperience with OpenCV.
I am planning to "flatten" the eye so it appears to move on a plane. The obvious choice is to calibrate the camera to try to remove the radial distortion.
During the calibration process the user looks at the corners of a grid on a screen. The moments of the pupil are stored in a Mat for each position during the calibration, so I have an image with dots corresponding to a number of eye positions when looking at the corners of a grid on the screen.
Is there any article or example I can refer to in order to get a good idea about this scenario and OpenCV-based eye prediction?
Thanks!
Different methods of camera calibration are possible (similar to the corner-dots method you describe).
There is thesis work on eye gaze tracking using C++ and OpenCV, and it should definitely help you; you can find some OpenCV-based C++ scripts there as well.
FYI:
Some works claim to do eye gaze tracking without calibration:
Calibration-Free Gaze Tracking: An Experimental Analysis by Maximilian Möllers
Free head motion eye gaze tracking without calibration
[I am restricted to posting fewer than 2 reference links]
To get precise locations of the eyes, you first need to calibrate your camera, using the chessboard approach or some other tool. Then you need to undistort the image if it is not straight.
OpenCV already comes with an eye detector (a Haar classifier; see the haarcascade_eye.xml file), so you can locate the eye and track it easily.
Besides that, there is only the math to map the detected eye to the location it is looking at.
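For the detection part, a minimal sketch using Emgu's CascadeClassifier wrapper; the cascade file path, camera index and parameter values are placeholders you would tune for your own setup:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

class EyeDetectorDemo
{
    static void Main()
    {
        // Path to the Haar eye cascade shipped with OpenCV (placeholder location).
        var eyeCascade = new CascadeClassifier("haarcascade_eye.xml");

        var capture = new VideoCapture(0);   // default webcam
        Mat frame;
        while ((frame = capture.QueryFrame()) != null && !frame.IsEmpty)
        {
            using (var gray = new Mat())
            {
                CvInvoke.CvtColor(frame, gray, ColorConversion.Bgr2Gray);
                CvInvoke.EqualizeHist(gray, gray);

                // Detect candidate eye regions; tune scaleFactor/minNeighbors for your camera.
                Rectangle[] eyes = eyeCascade.DetectMultiScale(gray, 1.1, 10, new Size(20, 20));
                foreach (Rectangle eye in eyes)
                    CvInvoke.Rectangle(frame, eye, new MCvScalar(0, 255, 0), 2);
            }

            CvInvoke.Imshow("eyes", frame);
            if (CvInvoke.WaitKey(1) == 27) break;   // Esc to quit
        }
    }
}
```

Calibration itself can follow the usual chessboard workflow (CvInvoke.FindChessboardCorners and CvInvoke.CalibrateCamera in recent Emgu builds), and the undistorted pupil positions recorded while the user looks at the grid corners can then be mapped to screen coordinates, for example with a fitted homography.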

Localization of a robot using Kinect and EMGU (OpenCV wrapper)

I'm working on a small WPF desktop app to track a robot. I have a Kinect for Windows on my desk and I was able to implement the basic features and run the depth camera stream and the RGB camera stream.
What I need is to track a robot on the floor, but I have no idea where to start. I found out that I should use EMGU (an OpenCV wrapper).
What I want to do is track the robot and find its location using the depth camera. Basically, it's localization of the robot using stereo triangulation. Then, using TCP and Wi-Fi, I'll send the robot commands to move it from one place to another, using both the RGB and depth cameras. The RGB camera will also be used to map the objects in the area so that the robot can take the best path and avoid them.
The problem is that I have never worked with computer vision before and this is actually my first attempt. I'm not tied to a deadline and I'm more than willing to learn everything related in order to finish this project.
I'm looking for details, explanations, hints, links or tutorials to achieve what I need.
Thanks.
Robot localization is a very tricky problem, and I myself have been struggling with it for months now, so I can tell you what I have achieved. You have a number of options:
Optical flow based odometry (also known as visual odometry):
Extract keypoints or features from one image (I used Shi-Tomasi corners, i.e. cvGoodFeaturesToTrack)
Do the same for a consecutive image
Match these features (I used Lucas-Kanade)
Extract depth information from the Kinect
Calculate the transformation between the two 3D point clouds.
What the above algorithm does is estimate the camera motion between the two frames, which will tell you the position of the robot.
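For steps 4 and 5, each tracked pixel and its depth reading has to be turned into a 3D camera-space point before the two clouds can be aligned. A minimal back-projection sketch; the intrinsics below are rough, assumed Kinect-v1 depth-camera values and should be replaced by your own calibration:

```csharp
using System.Drawing;

struct Point3
{
    public double X, Y, Z;
}

static class DepthBackProjection
{
    // Rough, assumed Kinect-v1 depth-camera intrinsics; use your own calibration instead.
    const double Fx = 580.0, Fy = 580.0;   // focal lengths in pixels
    const double Cx = 320.0, Cy = 240.0;   // principal point for a 640x480 depth frame

    // Converts a depth-image pixel (u, v) with depth in millimetres to a 3D point in metres.
    public static Point3 ToCameraSpace(PointF pixel, ushort depthMillimetres)
    {
        double z = depthMillimetres / 1000.0;
        return new Point3
        {
            X = (pixel.X - Cx) * z / Fx,
            Y = (pixel.Y - Cy) * z / Fy,
            Z = z
        };
    }
}
```

Once the tracked points from two consecutive frames are back-projected into two such 3D point sets, the rigid transform between them (step 5) can be estimated with a least-squares or RANSAC fit, and that transform is the camera, and hence robot, motion.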
Monte Carlo localization: this is rather simpler, but you should also use wheel odometry with it.
Check this paper out for a C#-based approach.
The method above uses probabilistic models to determine the robot's location.
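As a rough, self-contained illustration of the idea rather than the implementation from that paper, here is a minimal 2D particle filter: predict each particle with the (noisy) wheel odometry, re-weight it by how well a sensor reading fits that pose, then resample. The noise values and models are placeholders.

```csharp
using System;
using System.Linq;

class Particle
{
    public double X, Y, Theta, Weight;
}

class ParticleFilter
{
    readonly Random _rng = new Random();
    public Particle[] Particles;

    public ParticleFilter(int count, double worldWidth, double worldHeight)
    {
        // Start with particles spread uniformly over the map.
        Particles = Enumerable.Range(0, count).Select(_ => new Particle
        {
            X = _rng.NextDouble() * worldWidth,
            Y = _rng.NextDouble() * worldHeight,
            Theta = _rng.NextDouble() * 2 * Math.PI,
            Weight = 1.0 / count
        }).ToArray();
    }

    // Predict: move every particle by the wheel-odometry estimate plus noise.
    public void Predict(double distance, double turn)
    {
        foreach (var p in Particles)
        {
            p.Theta += turn + Gaussian(0.05);
            double d = distance + Gaussian(0.02);
            p.X += d * Math.Cos(p.Theta);
            p.Y += d * Math.Sin(p.Theta);
        }
    }

    // Update: re-weight each particle by how well a sensor reading fits that pose
    // hypothesis, then resample so likely poses survive and unlikely ones die out.
    public void Update(Func<Particle, double> measurementLikelihood)
    {
        foreach (var p in Particles) p.Weight *= measurementLikelihood(p);
        double total = Particles.Sum(p => p.Weight);
        if (total <= 0) return;   // degenerate reading; skip this update
        foreach (var p in Particles) p.Weight /= total;
        Resample();
    }

    void Resample()
    {
        var next = new Particle[Particles.Length];
        for (int i = 0; i < next.Length; i++)
        {
            double r = _rng.NextDouble(), cumulative = 0;
            foreach (var p in Particles)
            {
                cumulative += p.Weight;
                if (cumulative < r) continue;
                next[i] = new Particle { X = p.X, Y = p.Y, Theta = p.Theta, Weight = 1.0 / next.Length };
                break;
            }
            if (next[i] == null) next[i] = Particles[Particles.Length - 1];
        }
        Particles = next;
    }

    double Gaussian(double stdDev)
    {
        // Box-Muller transform for zero-mean Gaussian noise.
        double u1 = 1.0 - _rng.NextDouble(), u2 = _rng.NextDouble();
        return stdDev * Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Sin(2.0 * Math.PI * u2);
    }
}
```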
The sad part is that even though libraries exist in C++ to do what you need very easily, wrapping them for C# is a Herculean task. If, however, you can code a wrapper, then 90% of your work is done; the key libraries to use are PCL and MRPT.
The last option (which is by far the easiest, but the most inaccurate) is to use KinectFusion, built into the Kinect SDK 1.7. But my experience with it for robot localization has been very bad.
You must read SLAM for Dummies; it will make things about Monte Carlo localization very clear.
The hard reality is that this is very tricky, and you will most probably end up doing it yourself. I hope you dive into this vast topic and learn some awesome stuff.
For further information, or for the wrappers that I have written, just comment below... :-)
Best
Not sure if it would help you or not, but I put together a Python module that might help.
http://letsmakerobots.com/node/38883#comments

Fingertip detection from a depth image using Kinect (C# and WPF)

I'm making a project for my university. I'm working with the Kinect and I want to detect the fingertips using the Microsoft SDK. I did depth segmentation based on the closest object and the player, so I now have only the hand part in my image. But I have no idea what to do next. I'm trying to get the hand's boundary but can't find a method to do it.
So can anyone help me and suggest some useful ways or ideas to do that?
I am using C# and WPF.
It would be great if anyone could provide sample code. I only want to detect the index finger and the thumb.
Thanks in advance
By the way, about finding the boundaries, think about this:
The Kinect gives you the position of your hand. Your hand is at some Z position; that is its distance from the Kinect device. Now, every pixel (from the depth stream) belonging to your hand is at almost the same Z position as the hand joint. Your boundary will be any pixel that lies much further back than your hand. This way you can find the boundary of your hand in every direction. You just need to analyse the pixel data.
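A minimal sketch of that thresholding idea; the frame size, millimetre depth format and tolerance band are assumptions, and the hand joint's depth comes from the SDK's skeletal tracking:

```csharp
using System;

static class HandSegmentation
{
    // depthMillimetres: one Kinect depth frame, row-major; width/height assumed 640x480.
    // handDepthMillimetres: Z of the hand joint reported by skeletal tracking.
    public static bool[,] SegmentHand(ushort[] depthMillimetres, int width, int height,
                                      ushort handDepthMillimetres,
                                      int toleranceMillimetres = 80)
    {
        var mask = new bool[height, width];
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                int d = depthMillimetres[y * width + x];
                // A pixel belongs to the hand if its depth sits within a small band around
                // the joint depth; anything much further back is background, so the last
                // "hand" pixel before the jump in depth is the boundary.
                mask[y, x] = d != 0 && Math.Abs(d - handDepthMillimetres) <= toleranceMillimetres;
            }
        }
        return mask;
    }
}
```

The outer contour of the resulting mask is the hand boundary; a convex hull and convexity-defect analysis of that contour is a common way to then pick out the index finger and thumb.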
Hope it helped
Try this download; it is Kinect finger detection. Get the "Recommended" download on the page; I have found the .dlls in it work well. Good luck!

How to draw an unfilled square on top of streaming video using the mouse and automatically track the object enclosed by the square in C#?

I am making an object tracking application. I have used Emgu CV 2.1.0.0 to load a video file into a PictureBox, and I have also taken the video stream from a web camera.
Now I want to draw an unfilled square on the video stream using the mouse and then track the object enclosed by that square as the video continues to stream.
This is what people have suggested so far:
(1) .NET video overlay drawing (DirectX) - but this is for C++ users. The suggester said that there are .NET wrappers, but I had a hard time finding any.
(2) The DxLogo sample:
DxLogo – a sample application showing how to superimpose a logo on a data stream.
It uses a capture device for the video source and outputs the result to a file.
Sadly, this does not use a mouse.
(3) GDI+ and mouse handling - this is an area where I do not have a clue (a sketch for this part follows the question below).
As for tracking the object in the square, I would appreciate it if someone could give me some research paper links to read.
Any help with using the mouse to draw on a video is greatly appreciated.
Thank you for taking the time to read this.
Many thanks
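For the GDI+ and mouse-handling route in suggestion (3), a minimal WinForms sketch of rubber-band rectangle selection over a PictureBox; the control and class names are placeholders, and the rectangle it produces is what you would hand to your tracker:

```csharp
using System;
using System.Drawing;
using System.Windows.Forms;

public class SelectionForm : Form
{
    readonly PictureBox _video = new PictureBox { Dock = DockStyle.Fill };
    Point _start;
    Rectangle _selection;
    bool _dragging;

    public SelectionForm()
    {
        Controls.Add(_video);
        _video.MouseDown += (s, e) => { _dragging = true; _start = e.Location; };
        _video.MouseMove += (s, e) =>
        {
            if (!_dragging) return;
            // Normalise so the rectangle stays valid whichever way the user drags.
            _selection = new Rectangle(
                Math.Min(_start.X, e.X), Math.Min(_start.Y, e.Y),
                Math.Abs(e.X - _start.X), Math.Abs(e.Y - _start.Y));
            _video.Invalidate();   // forces a repaint so the overlay updates
        };
        _video.MouseUp += (s, e) => _dragging = false;
        _video.Paint += (s, e) =>
        {
            // Draw the unfilled rectangle on top of whatever frame is currently shown.
            if (_selection.Width > 0 && _selection.Height > 0)
            {
                using (var pen = new Pen(Color.Red, 2))
                    e.Graphics.DrawRectangle(pen, _selection);
            }
        };
    }

    // The selection (in PictureBox coordinates) to pass to the tracker;
    // scale it into image coordinates if SizeMode stretches the frame.
    public Rectangle Selection => _selection;
}
```

Each new frame assigned to the PictureBox's Image property invalidates the control, so the Paint handler keeps redrawing the rectangle over the video.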
It sounds like you want to do image detection and/or tracking.
The EmguCV (http://www.emgu.com/wiki/index.php/Main_Page) library provides a good foundation for this sort of thing in .NET.
e.g. http://www.emgu.com/wiki/index.php/Tutorial#Examples
It's a pretty meaty subject with quite a few years and several branches of research associated with it, so I'm not sure anyone can give the definitive guide to such things, but reading up on neural networks and related topics would give you a pretty good grounding in the way EmguCV and related libraries manage it.
It should be noted that systems such as EmguCV are designed to recognise predefined items within a scene (such as a licence-plate number) rather than an arbitrary feature within a scene.
For arbitrary tracking of a given feature, a search for research papers on edge detection and the like (in combination with a library such as EmguCV) is probably a good start.
(You may also want to sneak a peek at an existing application such as http://www.pfhoe.com/ to see if it fits your needs.)
