How to get the movement of an edge with EmguCV - C#

I am new to programming. I am using EmguCV for my project, and I want to detect the movement of an edge between frames so that I can use the resulting motion vector for further operations. However, I can only find documentation on tracking the movement of feature points. How can I do the same for an edge?
Thanks a lot.
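One common approach, since optical flow is defined for points rather than whole edges, is to sample points along the edge and track those, then combine their displacements into a single motion vector. Below is a minimal sketch assuming an EmguCV 3.x-style API; the Canny thresholds and the assumption that the longest contour is the edge of interest are illustrative, not a definitive implementation.

```csharp
using System.Drawing;
using System.Linq;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

static PointF EstimateEdgeMotion(Mat prevGray, Mat currGray)
{
    using (Mat edges = new Mat())
    using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
    {
        // 1. Find edges in the previous frame and pick one contour.
        CvInvoke.Canny(prevGray, edges, 100, 200);
        CvInvoke.FindContours(edges, contours, null,
            RetrType.List, ChainApproxMethod.ChainApproxNone);
        Point[] contour = Enumerable.Range(0, contours.Size)
            .Select(i => contours[i].ToArray())
            .OrderByDescending(c => c.Length)
            .First();  // assumption: longest contour = the edge we want

        // 2. Sample every 5th contour point as a tracking point.
        PointF[] prevPts = contour.Where((p, i) => i % 5 == 0)
            .Select(p => new PointF(p.X, p.Y)).ToArray();

        // 3. Track the sampled points with pyramidal Lucas-Kanade flow.
        PointF[] currPts;
        byte[] status;
        float[] err;
        CvInvoke.CalcOpticalFlowPyrLK(prevGray, currGray, prevPts,
            new Size(21, 21), 3, new MCvTermCriteria(30, 0.01),
            out currPts, out status, out err);

        // 4. Average the displacement of successfully tracked points
        //    into one motion vector for the whole edge.
        var deltas = prevPts
            .Select((p, i) => new { p, i })
            .Where(x => status[x.i] == 1)
            .Select(x => new PointF(currPts[x.i].X - x.p.X,
                                    currPts[x.i].Y - x.p.Y))
            .ToList();
        return new PointF(deltas.Average(v => v.X),
                          deltas.Average(v => v.Y));  // guard empty lists in real code
    }
}
```

If the edge deforms rather than translating rigidly, keep the per-point vectors instead of averaging them.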

Related

Using a Radar Sensor

I have been experimenting with IR depth sensors, but they have been sadly lacking in accuracy (at least for what I want) and range.
So I am looking for alternatives.
The application: I am holding a rectangular block of plastic and sweeping it from left to right across a surface (a wall in this case), and I want to know if the gap between the plastic surface and the wall surface reaches a certain threshold.
I have a web camera attached to a Microsoft Surface, and I am aiming this camera at this user motion.
If I had a better quality image, I am sure I could use basic geometry to work this out. I am looking around for better cameras as I type...
I was considering a radar sensor instead.
I spent many hours yesterday looking for something like a USB radar sensor with an SDK that is friendly to use from C#.
I have not found anything.
That is not to say I have given up. I am continuing to look.
I just thought I would post here as well in case anyone has ideas on this.
I will continue to update my question, hopefully with a solution, if no one else does.
Thanks.

Best approach for 2D gesture recognition in VR?

I'm the developer on a game which uses gesture recognition with the HTC Vive roomscale VR headset, and I'm trying to improve the accuracy of our gesture recognition.
(The game, for context: http://store.steampowered.com//app/488760 . It's a game where you cast spells by drawing symbols in the air.)
Currently I'm using the $1 recognizer algorithm for 2D gesture recognition, and an orthographic camera tied to the player's horizontal rotation to flatten the gesture the player draws in space.
However, I'm sure there must be better approaches to the problem!
I have to represent the gestures in 2D in the game's instructions, so ideally I'd like to:
Find the optimal vector on which to flatten the gesture.
Flatten it into 2D space.
Use the best gesture recognition algorithm to recognise what gesture it is.
It would be really good to get close to 100% accuracy under all circumstances. Currently, for example, the game tends to get confused when players try to draw a circle in the heat of battle, and it assumes they're drawing a Z shape instead.
All suggestions welcomed. Thanks in advance.
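For steps 1 and 2, one option (a sketch of the general idea, not the game's actual code) is a PCA flattening: the two dominant principal axes of the drawn 3D point cloud span the best-fit drawing plane, and projecting onto them yields the 2D stroke to feed into the $1 recognizer. The sketch below uses System.Numerics for brevity; in Unity you would use UnityEngine.Vector3 and its equivalents.

```csharp
using System;
using System.Linq;
using System.Numerics;

static Vector2[] FlattenGesture(Vector3[] points)
{
    // Centre the stroke on its centroid.
    Vector3 mean = new Vector3(points.Average(p => p.X),
                               points.Average(p => p.Y),
                               points.Average(p => p.Z));
    Vector3[] centred = points.Select(p => p - mean).ToArray();

    // Build the 3x3 covariance matrix of the centred points.
    float[,] cov = new float[3, 3];
    foreach (Vector3 p in centred)
    {
        float[] v = { p.X, p.Y, p.Z };
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                cov[i, j] += v[i] * v[j];
    }
    Vector3 Mul(Vector3 x) => new Vector3(
        cov[0, 0] * x.X + cov[0, 1] * x.Y + cov[0, 2] * x.Z,
        cov[1, 0] * x.X + cov[1, 1] * x.Y + cov[1, 2] * x.Z,
        cov[2, 0] * x.X + cov[2, 1] * x.Y + cov[2, 2] * x.Z);

    // Power iteration: repeatedly applying the covariance matrix
    // converges to the direction of greatest spread.
    Vector3 PowerIter(Func<Vector3, Vector3> step)
    {
        Vector3 x = Vector3.Normalize(new Vector3(1f, 1f, 1f));
        for (int k = 0; k < 50; k++) x = Vector3.Normalize(step(x));
        return x;
    }
    Vector3 axis1 = PowerIter(Mul);
    // Deflate axis1 away so the iteration finds the second axis.
    Vector3 axis2 = PowerIter(x =>
    {
        Vector3 y = Mul(x);
        return y - Vector3.Dot(y, axis1) * axis1;
    });

    // Project every point onto the (axis1, axis2) plane: the 2D gesture.
    return centred.Select(p => new Vector2(Vector3.Dot(p, axis1),
                                           Vector3.Dot(p, axis2))).ToArray();
}
```

Compared with flattening against the player's horizontal rotation, this follows the plane the player actually drew in, which can help when gestures are drawn at odd angles in the heat of battle.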
Believe it or not, I found this post two months ago and decided to test my VR/AI skills by preparing a Unity package intended for recognising magic gestures in VR. Now I'm back with a complete VR demo: https://ravingbots.itch.io/vr-magic-gestures-ai
The recognition system tracks a gesture vector and then projects it onto a 2D grid. You can also set up a 3D grid very easily if you want the system to work with 3D shapes, but don't forget to provide a proper training set capturing a large number of shape variations.
Of course, the package is universal, and you can use it for non-magical applications as well. The code is well documented; the online documentation, rendered to a PDF, runs to 1000+ pages: https://files.ravingbots.com/docs/vr-magic-gestures-ai/
The package was tested with the HTC Vive. Support for Gear VR and other VR devices is being added progressively.
It seems to me this plugin called Gesture Recognizer 3.0 could give you great insight into what steps you should take:
Gesture Recognizer 3.0
Also, I found this JavaScript gesture recognition library on GitHub:
Jester
Hope it helps.
Personally, I recommend AirSig.
It covers more features, like authentication using the controllers.
The Vive version and Oculus version are free.
"It would be really good to get close to 100% accuracy under all circumstances." My experience is its built-in gestures is over 90% accuracy, signature part is over 96%. Hope it fits your requirement.

Camera calibration and predicting the eye focusing point on screen with OpenCV / EmguCV

I have a project idea to check web usability using eye tracking. For that I need to predict the focusing point on the screen (i.e. the pixel coordinates) at a fixed time interval (every 0.5 seconds).
Here is some additional information:
I intend to use OpenCV or EmguCV, but it is causing me a bit of trouble because of my inexperience with OpenCV.
I am planning to "flatten" the eye so it appears to move on a plane. The obvious choice is to calibrate the camera to try to remove the radial distortion.
During the calibration process the user looks at the corners of a grid on a screen. The pupil positions are stored in a Mat for each position during the calibration, so I have an image with dots corresponding to a number of eye positions recorded while looking at the corners of a grid on the screen.
Is there any article or example I can refer to, to get a good idea about this scenario and gaze prediction with OpenCV?
Thanks!
Different methods of camera calibration are possible (similar to your corner-dots method, among others).
There is thesis work on eye gaze tracking using C++ and OpenCV that should definitely help you; you can find some OpenCV-based C++ scripts there as well.
FYI: some works have been presented that claim eye-gaze tracking without calibration:
Calibration-Free Gaze Tracking: An Experimental Analysis, by Maximilian Möllers
Free head motion eye gaze tracking without calibration
[I am restricted to posting fewer than 2 reference links]
To get precise locations of the eyes, you first need to calibrate your camera, using the chessboard approach or some other tool. Then you need to undistort the image if it is distorted.
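A minimal sketch of that calibration step, assuming an EmguCV 3.x-style API (the pattern size, square size, and flags are placeholders to adapt):

```csharp
using System.Collections.Generic;
using System.Drawing;
using System.Linq;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

// patternSize = count of inner chessboard corners, e.g. new Size(9, 6).
static void CalibrateThenUndistort(Mat[] chessboardViews, Size patternSize,
                                   float squareSize, Mat frame, Mat undistorted)
{
    // 1. Find the chessboard corners in every calibration view.
    var imagePoints = new List<PointF[]>();
    foreach (Mat view in chessboardViews)
        using (var corners = new VectorOfPointF())
            if (CvInvoke.FindChessboardCorners(view, patternSize, corners))
                imagePoints.Add(corners.ToArray());

    // 2. The physical corner grid lies on the z = 0 plane, identical per view.
    MCvPoint3D32f[] grid = Enumerable.Range(0, patternSize.Height)
        .SelectMany(y => Enumerable.Range(0, patternSize.Width)
            .Select(x => new MCvPoint3D32f(x * squareSize, y * squareSize, 0)))
        .ToArray();
    MCvPoint3D32f[][] objectPoints = imagePoints.Select(_ => grid).ToArray();

    // 3. Solve for the camera matrix and distortion coefficients.
    Mat cameraMatrix = new Mat(), distCoeffs = new Mat();
    Mat[] rvecs, tvecs;
    CvInvoke.CalibrateCamera(objectPoints, imagePoints.ToArray(),
        chessboardViews[0].Size, cameraMatrix, distCoeffs,
        CalibType.Default, new MCvTermCriteria(30, 0.001),
        out rvecs, out tvecs);

    // 4. Undistort each live frame before locating the pupil in it.
    CvInvoke.Undistort(frame, undistorted, cameraMatrix, distCoeffs);
}
```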
OpenCV already comes with an eye detector (a Haar classifier; refer to the eye.xml file), so you can locate and track the eyes easily.
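For example (a sketch; the cascade ships with OpenCV as haarcascade_eye.xml, so the file path here is an assumption about your install):

```csharp
using System.Drawing;
using Emgu.CV;

static Rectangle[] DetectEyes(Mat gray)
{
    // Load the bundled Haar eye cascade (adjust the path to your setup).
    using (var eyes = new CascadeClassifier("haarcascade_eye.xml"))
    {
        return eyes.DetectMultiScale(gray,
            1.1,                 // pyramid scale factor
            4,                   // min neighbours per detection
            new Size(20, 20));   // smallest eye to accept
    }
}
```

Track the centre of the returned rectangle (or the dark pupil blob inside it) from frame to frame.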
Besides that, it is only math that remains: mapping the detected eye position to the location on the screen it is looking at.
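As a sketch of that math, assuming the head stays reasonably still: fit a homography from the calibration pupil positions to the known on-screen targets, then push live pupil positions through it (EmguCV 4.x-style names; real gaze trackers often use a polynomial fit instead):

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

// pupilAtTarget[i] = pupil position recorded while the user looked at
// screenTargets[i], i.e. the grid corners shown during calibration.
static Mat FitGazeMapping(PointF[] pupilAtTarget, PointF[] screenTargets)
{
    return CvInvoke.FindHomography(pupilAtTarget, screenTargets,
                                   RobustEstimationAlgorithm.Ransac, 3);
}

static PointF PupilToScreen(Mat gazeMap, PointF pupil)
{
    // Map one live pupil position to a predicted screen pixel.
    PointF[] mapped = CvInvoke.PerspectiveTransform(new[] { pupil }, gazeMap);
    return mapped[0];
}
```

Sample the predicted point every 0.5 seconds to get the time series you described.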

Fingertip detection from a depth image using Kinect (C# and WPF)

I'm making a project for my university. I'm working with the Kinect and I want to detect fingertips using the Microsoft SDK. I have done depth segmentation based on the closest object and the player, so I now have only the hand part in my image. But I have no idea what to do next: I'm trying to get the hand's boundary but cannot find a method to do it.
So can anyone help me and suggest some useful ways or ideas to do that?
I am using C# and WPF.
It would be great if anyone could provide sample code. I only want to detect the index finger and the thumb.
Thanks in advance.
By the way, about finding the boundaries, think about this:
The Kinect gives you the position of your hand, which sits at some Z position, i.e. its distance from the Kinect device. Every pixel of your hand (from the depth stream) is at almost the same Z position as the hand joint, while a boundary pixel is one whose neighbour lies much further in the background than your hand. This way you can find the boundary in every direction around your hand; you just need to analyse the pixel data.
Hope it helped.
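A minimal sketch of that pixel analysis, assuming you already have the depth frame as millimetre values and the hand joint's depth from the skeleton stream (the names here are illustrative, not Kinect SDK calls):

```csharp
using System;

static bool[,] HandMaskWithBoundary(ushort[] depthMm, int width, int height,
                                    ushort handDepthMm, out bool[,] boundary)
{
    const ushort tolerance = 60;  // pixels within +/- 6 cm count as hand
    var mask = new bool[height, width];
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            int d = depthMm[y * width + x];
            mask[y, x] = d != 0 && Math.Abs(d - handDepthMm) <= tolerance;
        }

    // A boundary pixel is a hand pixel with a non-hand 4-neighbour.
    boundary = new bool[height, width];
    for (int y = 1; y < height - 1; y++)
        for (int x = 1; x < width - 1; x++)
            boundary[y, x] = mask[y, x] &&
                (!mask[y - 1, x] || !mask[y + 1, x] ||
                 !mask[y, x - 1] || !mask[y, x + 1]);
    return mask;
}
```

From that boundary, the index finger and thumb tend to show up as the sharpest curvature peaks (or as convexity defects if you run the contour through OpenCV).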
Try this download; it is Kinect finger detection. Get the "Recommended" download on the page. I have found that the .dlls in it work well. Good luck!

How to detect left and right side of face?

I am using OpenCvSharp and Haar cascades for profile faces, but I want to know which side of the face was detected. Is that possible?
You could determine the position of the eyes and then check which side the visible eye is on: the symmetry axis of the face is at the middle of the distance between the eyes.
As far as I know, OpenCV has some built-in functionality to find eyes within the face, or at least some libraries built on OpenCV do (e.g. EmguCV).
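A sketch of that idea with OpenCvSharp (the cascade file names are OpenCV's standard ones; whether "eye on the left half" means a left or right profile depends on your camera mirroring, so treat the labels as assumptions):

```csharp
using OpenCvSharp;

static string DetectedFaceSide(Mat gray)
{
    using var faceCascade = new CascadeClassifier("haarcascade_profileface.xml");
    using var eyeCascade = new CascadeClassifier("haarcascade_eye.xml");

    foreach (Rect face in faceCascade.DetectMultiScale(gray))
    {
        // Look for the single visible eye inside the face rectangle.
        using Mat faceRoi = new Mat(gray, face);
        Rect[] eyes = eyeCascade.DetectMultiScale(faceRoi);
        if (eyes.Length == 0) continue;

        // The eye's position relative to the face centre gives the side.
        double eyeCentreX = eyes[0].X + eyes[0].Width / 2.0;
        return eyeCentreX < face.Width / 2.0
            ? "eye on left half of face"
            : "eye on right half of face";
    }
    return "no face with a visible eye found";
}
```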
