How to detect the left and right side of a face? - C#

I am using OpenCvSharp and Haar cascades for profile faces, but I want to know which side of the face was detected. Is that possible?

You could determine the position of the eyes and then check which side of the face rectangle the detected eye is on; the face's axis of symmetry lies halfway between the eyes.
As far as I know, OpenCV has some built-in functionality to find eyes within a face, or at least some libraries using OpenCV do (e.g. Emgu).
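A minimal sketch of that idea with OpenCvSharp, assuming the stock OpenCV cascade files are on disk (the paths and the image file name are placeholders):

```csharp
using OpenCvSharp;

class FaceSide
{
    static void Main()
    {
        using var faceCascade = new CascadeClassifier("haarcascade_profileface.xml");
        using var eyeCascade = new CascadeClassifier("haarcascade_eye.xml");
        using var img = Cv2.ImRead("face.jpg", ImreadModes.Grayscale);

        foreach (Rect face in faceCascade.DetectMultiScale(img))
        {
            // Search for eyes only inside the detected face rectangle.
            using var roi = new Mat(img, face);
            Rect[] eyes = eyeCascade.DetectMultiScale(roi);
            if (eyes.Length == 0) continue;

            // In a profile view only one eye is visible; its position relative
            // to the centre of the face box tells you which side you are seeing.
            double eyeCentreX = eyes[0].X + eyes[0].Width / 2.0;
            string side = eyeCentreX < face.Width / 2.0 ? "left" : "right";
            System.Console.WriteLine($"Face at {face}: eye in the {side} half");
        }
    }
}
```

Note that the stock profile cascade only matches one orientation; a common trick is to run it a second time on a horizontally flipped image (Cv2.Flip) to catch the opposite profile.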

Related

How to get the movement of an edge with EmguCV

I am a newbie at programming. I am using EmguCV for my project, and now I want to detect the movement of an edge so I can use that motion vector for further operations. However, I have only found documentation on tracking the movement of feature points. How can I do the same for an edge?
Thanks a lot.
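One hedged sketch of a common workaround, assuming EmguCV: sample trackable points that lie on the edge (using the Canny output as a mask for GoodFeaturesToTrack) and follow them with Lucas-Kanade optical flow, which yields one motion vector per edge point. Overloads vary between EmguCV versions, so treat this as a sketch rather than the definitive API:

```csharp
using System;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

class EdgeFlow
{
    static void Main()
    {
        using var prev = CvInvoke.Imread("frame0.png", ImreadModes.Grayscale);
        using var next = CvInvoke.Imread("frame1.png", ImreadModes.Grayscale);

        // Edge map of the first frame, used as a mask so the sampled points
        // actually lie on the edge of interest.
        using var edges = new Mat();
        CvInvoke.Canny(prev, edges, 50, 150);

        // Pick up to 100 strong, well-separated points on the edge.
        using var cornerVec = new VectorOfPointF();
        CvInvoke.GoodFeaturesToTrack(prev, cornerVec, 100, 0.01, 5, edges);
        PointF[] prevPts = cornerVec.ToArray();

        // Lucas-Kanade optical flow: each tracked point gets a new position,
        // i.e. a motion vector for that part of the edge.
        CvInvoke.CalcOpticalFlowPyrLK(
            prev, next, prevPts, new Size(21, 21), 3,
            new MCvTermCriteria(30, 0.01),
            out PointF[] nextPts, out byte[] status, out float[] _);

        for (int i = 0; i < prevPts.Length; i++)
            if (status[i] == 1)
                Console.WriteLine($"{prevPts[i]} -> {nextPts[i]}");
    }
}
```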

Recognizing Sign Language in 3D with Kinect

We're making a sign language translator using a Kinect 1.0 device for my undergraduate final-year project.
So far we have managed to recognize gestures in 2D using the skeleton APIs in the Kinect SDK and applied the DTW algorithm to them.
We also tracked fingers and counted how many fingers are shown in the frame, by contouring and applying a convex hull to the contour. We used C# and EmguCV to achieve this.
Now we're stuck on how to transform the data into 3D coordinates. What I don't get is:
1. What will the 3D visualization look like? For now we just use the depth stream and apply a skin classifier to it, showing the skin parts as white pixels and everything else as black, and we show the contoured, convex-hulled area in the color stream. Will we use the same depth and color streams for 3D? If so, how do we transform the data and coordinates into 3D?
2. For gestures that involve a finger touching the nose, how do I keep the contoured area from including the whole face, and tell which finger touches which side of the nose? Is this where 3D comes in?
3. Which APIs and libraries are there that can help us in C#?
(Image: extracted fingers after contouring and convex hull)
Kinect creates its depth map using infrared: it projects an infrared grid and measures the distance to each point of the grid. It seems you're already using the depth info from this grid.
For converting to 3D you should indeed use the depth info. Some basic trigonometry will transform the depth map into 3D (x, y, z) coordinates, and the color stream from the camera can then be mapped onto those points.
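A minimal sketch of that trigonometry, assuming the Kinect 1.0's published field of view of roughly 57 x 43 degrees (the SDK's own coordinate-mapping calls do this more accurately; this only shows the idea):

```csharp
using System;

static class DepthTo3D
{
    // Convert a depth pixel (u, v) with measured depth z (metres) into
    // camera-space (x, y, z) coordinates for a 640x480 depth frame.
    public static (double X, double Y, double Z) Convert(
        int u, int v, double zMetres, int width = 640, int height = 480)
    {
        double tanHalfH = Math.Tan(57.0 / 2 * Math.PI / 180); // horizontal FOV
        double tanHalfV = Math.Tan(43.0 / 2 * Math.PI / 180); // vertical FOV

        // Normalize the pixel to [-1, 1] around the image centre, then scale
        // by the measured depth to get metric coordinates.
        double x = (u - width / 2.0) / (width / 2.0) * tanHalfH * zMetres;
        double y = -((v - height / 2.0) / (height / 2.0)) * tanHalfV * zMetres;
        return (x, y, zMetres);
    }
}
```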
Detecting whether fingers are touching the nose is a difficult problem. Since the grid density of the Kinect is not very high, 3D probably won't help you here. I would suggest using edge detection (e.g. the Canny algorithm) together with contour recognition on the camera images to detect whether a finger is in front of the face. Testing whether the finger actually touches the nose, rather than just being close to it, is the real challenge.
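A rough sketch of that suggestion, written with OpenCvSharp for brevity (EmguCV has equivalent CvInvoke.Canny / CvInvoke.FindContours calls); the tall-and-narrow "finger-like" test at the end is purely an illustrative assumption:

```csharp
using OpenCvSharp;

class FingerInFront
{
    static void Main()
    {
        using var frame = Cv2.ImRead("color_frame.png", ImreadModes.Grayscale);
        using var edges = new Mat();
        Cv2.Canny(frame, edges, 50, 150);

        // Contours over the edge image; a long, thin contour crossing the
        // face region is a candidate for a finger in front of the face.
        Cv2.FindContours(edges, out Point[][] contours, out HierarchyIndex[] _,
            RetrievalModes.External, ContourApproximationModes.ApproxSimple);

        foreach (var contour in contours)
        {
            Rect box = Cv2.BoundingRect(contour);
            if (box.Height > 3 * box.Width)   // crude "finger-like" shape test
                Cv2.Rectangle(frame, box, Scalar.White);
        }
        Cv2.ImWrite("candidates.png", frame);
    }
}
```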

Camera calibration and predicting the eye focus point on screen with OpenCV / EmguCV

I have a project idea: checking web usability using eye tracking. For that I need to predict the focus point on the screen (i.e. the pixel coordinates) at a fixed time interval (0.5 seconds).
Here is some additional information:
I intend to use OpenCV or EmguCV, but it is causing me a bit of trouble because of my inexperience with OpenCV.
I am planning to "flatten" the eye so it appears to move on a plane. The obvious choice is to calibrate the camera to try to remove the radial distortion.
During the calibration process the user looks at the corners of a grid on the screen. The pupil position (computed from image moments) is stored in a Mat for each calibration point, so I end up with an image of dots corresponding to a number of eye positions when looking at the corners of the grid on the screen.
Is there any article or example I can refer to, to get a good idea about this scenario and gaze prediction with OpenCV?
Thanks!
Different methods of camera calibration are possible, similar to your corner-dots method. There is thesis work on eye gaze tracking using C++ and OpenCV which should certainly help you; you can find some OpenCV-based C++ scripts there as well.
FYI, some works claim to do eye gaze tracking without calibration:
- Calibration-Free Gaze Tracking: An Experimental Analysis, by Maximilian Möllers
- Free head motion eye gaze tracking without calibration
[I am restricted to posting fewer than two reference links.]
To get precise eye locations, you first need to calibrate your camera, using the chessboard approach or some other tool. Then you need to undistort the image if it is not already straight.
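A minimal chessboard-calibration sketch, assuming OpenCvSharp's typed overloads and a folder of calibration shots (the folder name, pattern size, and exact overloads are assumptions that may need adjusting for your version):

```csharp
using System.Collections.Generic;
using OpenCvSharp;

class ChessboardCalibration
{
    static void Main()
    {
        var pattern = new Size(9, 6);               // inner corners per row/column
        var template = new List<Point3f>();
        for (int y = 0; y < pattern.Height; y++)
            for (int x = 0; x < pattern.Width; x++)
                template.Add(new Point3f(x, y, 0)); // board plane at Z = 0

        var objectPoints = new List<IEnumerable<Point3f>>();
        var imagePoints = new List<IEnumerable<Point2f>>();
        Size imageSize = default;

        foreach (var file in System.IO.Directory.GetFiles("calib", "*.png"))
        {
            using var gray = Cv2.ImRead(file, ImreadModes.Grayscale);
            imageSize = gray.Size();
            if (!Cv2.FindChessboardCorners(gray, pattern, out Point2f[] corners))
                continue;                            // board not visible in this shot
            objectPoints.Add(template);
            imagePoints.Add(corners);
        }

        var cameraMatrix = new double[3, 3];
        var distCoeffs = new double[5];
        double rms = Cv2.CalibrateCamera(
            objectPoints, imagePoints, imageSize,
            cameraMatrix, distCoeffs, out _, out _);
        System.Console.WriteLine($"RMS reprojection error: {rms:F3} px");
        // cameraMatrix and distCoeffs then feed Cv2.Undistort to straighten images.
    }
}
```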
OpenCV already ships with an eye detector (a Haar classifier; see the haarcascade_eye.xml file), so you can locate and track the eye easily.
Besides that, it is only math that maps the detected eye position to the location on screen it is looking at.
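One concrete form that math can take, under the simplifying assumption that the head stays still: fit a homography from recorded pupil positions to the known screen points of the calibration grid, then push new pupil positions through it. A sketch with OpenCvSharp; all coordinate values are placeholders:

```csharp
using OpenCvSharp;

class GazeMap
{
    static void Main()
    {
        // Pupil centres recorded while the user looked at the four screen corners.
        Point2d[] pupil = { new(312, 248), new(352, 250), new(310, 270), new(354, 272) };
        Point2d[] screen = { new(0, 0), new(1920, 0), new(0, 1080), new(1920, 1080) };

        using Mat h = Cv2.FindHomography(pupil, screen);

        // Map a newly observed pupil centre to an on-screen pixel.
        Point2d[] gaze = Cv2.PerspectiveTransform(new[] { new Point2d(330, 260) }, h);
        System.Console.WriteLine($"Estimated gaze point: {gaze[0]}");
    }
}
```

With more calibration points (e.g. a full grid rather than four corners) the same call fits a least-squares homography, which absorbs some of the residual camera distortion as well.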

Fingertip detection from depth image using Kinect (C# and WPF)

I'm working on a project for my university. I'm using Kinect and I want to detect fingertips using the Microsoft SDK. I've done depth segmentation based on the closest object and the player, so I now have only the hand part in my image. But I have no idea what to do next: I'm trying to get the hand's boundary but can't find a method to do it.
Can anyone suggest some useful approaches or ideas for that?
I am using C# and WPF.
It would be great if anyone could provide sample code. I only want to detect the index finger and the thumb.
Thanks in advance
By the way, about finding the boundaries, think about this:
Kinect gives you the position of your hand joint, including its Z position, which is the distance from the Kinect device. Every pixel of your hand (in the depth stream) is at almost the same Z position as the hand joint. A boundary pixel is one whose neighbour lies much further back than your hand. This way you can find the hand's boundary in every direction; you just need to analyze the pixel data.
Hope it helped
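A minimal sketch of that pixel test; the array layout, the names, and the 60 mm tolerance are illustrative assumptions, not SDK values:

```csharp
using System;

static class HandBoundary
{
    // depthMm: a depth frame copied into a short[] of millimetre values
    // (e.g. from a Kinect DepthImageFrame); handDepthMm: the hand joint's depth.
    public static bool[,] HandMask(
        short[] depthMm, int width, int height, int handDepthMm, int toleranceMm = 60)
    {
        var mask = new bool[height, width];
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
            {
                int d = depthMm[y * width + x];
                // A pixel belongs to the hand if it is nearly as close to the
                // sensor as the hand joint; anything much deeper is background.
                mask[y, x] = Math.Abs(d - handDepthMm) <= toleranceMm;
            }
        return mask; // boundary pixels are mask pixels with a non-mask neighbour
    }
}
```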
Try this download; it is a Kinect finger-detection project. Get the "Recommended" download on the page - I have found that the .dlls in it work well. Good luck!

Detect quadrilateral corner points in a high-contrast image

I need to detect the corner points of a quadrilateral in a fairly high-contrast image. I understand how to detect large changes in contrast between two pixels, but I'm wondering what the best way would be to detect the entire boundary and the corners of a quad in an image.
So I'm basically looking for a good article/algorithm which explains/does this. Note that I've seen articles which detect edges but don't actually turn them into vector-based lines. It's the corner points I'm really after! :)
The Hough Transform is a very useful algorithm for your task. Here are a few links: 1) Wikipedia, 2) a more detailed treatment with examples (but on solid shapes), 3) an example using points.
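As a hedged illustration of the Hough route with OpenCvSharp: run Canny, detect lines in polar form, and take pairwise line intersections inside the image as corner candidates (the Canny and vote thresholds are tuning assumptions):

```csharp
using System;
using OpenCvSharp;

class QuadCorners
{
    static void Main()
    {
        using var img = Cv2.ImRead("quad.png", ImreadModes.Grayscale);
        using var edges = new Mat();
        Cv2.Canny(img, edges, 80, 160);

        // 1 px rho resolution, 1 degree theta resolution, 100-vote threshold.
        LineSegmentPolar[] lines = Cv2.HoughLines(edges, 1, Math.PI / 180, 100);

        for (int i = 0; i < lines.Length; i++)
            for (int j = i + 1; j < lines.Length; j++)
            {
                Point2f? p = Intersect(lines[i], lines[j]);
                if (p is { } pt && pt.X >= 0 && pt.X < img.Width
                                && pt.Y >= 0 && pt.Y < img.Height)
                    Console.WriteLine($"Corner candidate: {pt}");
            }
    }

    // Solve x*cos(theta) + y*sin(theta) = rho for two lines;
    // returns null when the lines are near-parallel.
    static Point2f? Intersect(LineSegmentPolar a, LineSegmentPolar b)
    {
        double det = Math.Cos(a.Theta) * Math.Sin(b.Theta)
                   - Math.Sin(a.Theta) * Math.Cos(b.Theta);
        if (Math.Abs(det) < 1e-6) return null;
        double x = (a.Rho * Math.Sin(b.Theta) - b.Rho * Math.Sin(a.Theta)) / det;
        double y = (Math.Cos(a.Theta) * b.Rho - Math.Cos(b.Theta) * a.Rho) / det;
        return new Point2f((float)x, (float)y);
    }
}
```

Clustering the candidates and keeping the four strongest then gives the quad's corner points.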
Have a look at AForge - it has great computer vision capabilities that you can build on, and it's open source to boot, so even if it doesn't do what you want out of the box, you can get some ideas.
Use corner detection techniques, like Harris's or SUSAN. OpenCV can help you here.
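A minimal Harris sketch with OpenCvSharp (the 1% response threshold is an assumption to tune; note SUSAN is not bundled with OpenCV):

```csharp
using OpenCvSharp;

class HarrisCorners
{
    static void Main()
    {
        using var gray = Cv2.ImRead("quad.png", ImreadModes.Grayscale);
        using var response = new Mat();
        // Harris response with a 2 px neighbourhood, 3 px Sobel aperture, k = 0.04.
        Cv2.CornerHarris(gray, response, 2, 3, 0.04);

        // Keep only pixels whose Harris response is within 1% of the maximum;
        // each surviving blob marks a corner candidate.
        Cv2.MinMaxLoc(response, out double _, out double max);
        using var corners = new Mat();
        Cv2.Threshold(response, corners, 0.01 * max, 255, ThresholdTypes.Binary);
    }
}
```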
