Using a Radar Sensor - C#

I have been experimenting with IR depth sensors, but they have been sadly lacking in accuracy and range (at least for what I want).
So I am looking for alternatives.
The application: I am holding a rectangular block of plastic and sweeping it from left to right across a surface (a wall in this case), and I want to know if the gap between the plastic surface and the wall reaches a certain threshold.
I have a web camera attached to a Microsoft Surface, aimed at this user motion.
If I had a better-quality image, I am sure I could use basic geometry to work this out. I am looking around for better cameras as I type...
I was considering a radar sensor instead.
I spent many hours yesterday looking for something like a USB radar sensor with an SDK that is friendly for use with C#.
I have not found anything.
That is not to say I have given up; I am continuing to look.
I just thought I would post here as well for anyone's ideas on this.
I will continue to update my question, hopefully with a solution, if no one else does.
Thanks.

Related

C# Smoothing edges while removing background with Kinect V2

I am working on removing the background and leaving only the bodies, with Kinect V2 and C#/WPF, in real time.
Removing the background works fine, but the edges of the bodies are very rough, with jaggies along them.
I need to smooth the edges in real-time (30 frames per second). I would appreciate any advice on that.
I am able to select the edges (similar to Photoshop's magic wand).
I tried something like a Gaussian blur, but it seems too slow for a real-time application. I am probably missing something, because this seems to be a standard problem for many applications such as games. Thank you!
You probably need to look into implementations of depth image enhancement or smoothing that fill holes around the edges of the silhouettes. For starters, you can look into Kinect Depth Smoothing; this should work in real time since it's just based on calculating modes (a sketch follows after the references below). For a more accurate implementation, there are research papers that address the same issue, such as the ones below:
[a] Chen, L., Lin, H., Li, S., "Depth image enhancement for Kinect using region growing and bilateral filter," Pattern Recognition (ICPR), 2012 21st International Conference on, 3070-3073 (2012).
[b] Le, A. V., Jung, S.-W., Won, C. S., "Directional joint bilateral filter for depth images," Sensors 14(7), 11362-11378, Multidisciplinary Digital Publishing Institute (2014).
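
As promised above, here is a minimal C# sketch of the mode-based hole-filling idea (illustrative only, not the linked article's actual code): zero-valued "hole" pixels around the silhouette edges are replaced with the most frequent valid depth among their 8-neighbors. The method and buffer names are assumptions.

    using System.Collections.Generic;

    // Fill "hole" pixels (depth == 0) with the mode (most frequent value)
    // of their valid 8-neighbors. One pass over a raw ushort depth frame.
    static ushort[] FillDepthHoles(ushort[] depth, int width, int height)
    {
        var result = (ushort[])depth.Clone();
        var counts = new Dictionary<ushort, int>();

        for (int y = 1; y < height - 1; y++)
        {
            for (int x = 1; x < width - 1; x++)
            {
                int i = y * width + x;
                if (depth[i] != 0) continue;          // only fill holes

                counts.Clear();
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                    {
                        ushort d = depth[(y + dy) * width + (x + dx)];
                        if (d == 0) continue;
                        counts.TryGetValue(d, out int c);
                        counts[d] = c + 1;
                    }

                ushort best = 0;                      // pick the neighbor mode
                int bestCount = 0;
                foreach (var kv in counts)
                    if (kv.Value > bestCount) { best = kv.Key; bestCount = kv.Value; }
                if (bestCount > 0) result[i] = best;
            }
        }
        return result;
    }

Because it only touches hole pixels and uses no floating-point filtering, this kind of pass is cheap enough to run at 30 FPS on a 512x424 depth frame.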

Camera calibration and predicting the eye focusing point on screen with OpenCV / EmguCV

I have a project idea that checks web usability using eye tracking. For that I need to predict the focus point on the screen (i.e., pixel coordinates) at a specific time interval (0.5 seconds).
Here is some additional information:
I intended to use OpenCV or EmguCV, but it is causing me a bit of trouble because of my inexperience with OpenCV.
I am planning to "flatten" the eye so it appears to move on a plane. The obvious choice is to calibrate the camera to try to remove the radial distortion.
During the calibration process the user looks at the corners of a grid on a screen. The moments of the pupil are stored in a Mat for each position during the calibration, so I have an image with dots corresponding to a number of eye positions when looking at the corners of a grid on the screen.
Is there any article or example I can refer to, to get a good idea about this scenario and OpenCV eye prediction?
Thanks!
Different methods of camera calibration are possible (similar to your corner-dots method).
There is thesis work on eye gaze using C++ and OpenCV that should certainly help you; you can find some OpenCV-based C++ scripts there as well.
FYI, some works claim eye-gaze tracking without calibration:
Calibration-Free Gaze Tracking: An Experimental Analysis, by Maximilian Möllers
Free head motion eye gaze tracking without calibration
[I am restricted to post less than 2 reference links]
To get precise locations of the eyes, you first need to calibrate your camera, using the chessboard approach or some other tool. Then you need to undistort the image if it is not straight.
OpenCV already comes with an eye detector (a Haar classifier; see the eye.xml file), so you can locate the eye and track it easily.
Besides that, it is only math that is needed to map the detected eye to the location it is looking at.
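
As a rough illustration of the detector mentioned above, here is a minimal EmguCV sketch that loads a stock Haar eye cascade and locates eyes in a frame. The cascade file name/path and the parameter values are assumptions; adapt them to your OpenCV/EmguCV install.

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;

    // Locate eyes in a BGR frame with a Haar eye cascade via EmguCV.
    class EyeLocator
    {
        // The cascade XML ships with OpenCV; adjust the path for your install.
        private readonly CascadeClassifier _eyes =
            new CascadeClassifier("haarcascade_eye.xml");

        public Rectangle[] Locate(Mat frame)
        {
            using (var gray = new Mat())
            {
                CvInvoke.CvtColor(frame, gray, ColorConversion.Bgr2Gray);
                CvInvoke.EqualizeHist(gray, gray);    // evens out lighting
                return _eyes.DetectMultiScale(gray, 1.1, 4, new Size(20, 20));
            }
        }
    }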

Fingertip detection from depth image using Kinect (C# and WPF)

I'm making a project for my university. I'm working with the Kinect and I want to detect the fingertips using the Microsoft SDK. I did depth segmentation based on the closest object and the player, so I now have only the hand part in my image. But I have no idea what to do next: I'm trying to get the hand's boundary but have not found a method to do it.
Can anyone help me and suggest some useful ways or ideas to do that?
I am using C# and WPF.
It would be great if anyone could provide sample code. I only want to detect the index finger and the thumb.
Thanks in advance.
By the way, about finding the boundaries, think about this:
The Kinect gives you the position of your hand, which sits at some Z position: its distance from the Kinect device. Every pixel of your hand (from the depth stream) is at almost the same Z position as the hand joint, so a boundary pixel is one whose neighbor lies much further into the background than your hand. This way you can find the boundary in every direction around the hand; you just need to analyze the pixel data. A sketch of this idea follows below.
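
A minimal sketch of that thresholding idea, assuming a raw Kinect depth frame in millimeters and the tracked hand joint's depth; the tolerance value and the names are illustrative:

    using System;

    // Mark every depth pixel within a tolerance band around the tracked hand
    // joint's depth as "hand"; everything else is background.
    static bool[] HandMask(ushort[] depthMm, ushort handZ)
    {
        const int ToleranceMm = 80;                   // assumed band; tune it
        var mask = new bool[depthMm.Length];
        for (int i = 0; i < depthMm.Length; i++)
            mask[i] = depthMm[i] != 0 &&
                      Math.Abs(depthMm[i] - handZ) <= ToleranceMm;
        return mask;
    }

    // A boundary pixel is a hand pixel with at least one non-hand 4-neighbor
    // (assumes x, y are not on the image border).
    static bool IsBoundary(bool[] mask, int width, int x, int y)
    {
        int i = y * width + x;
        return mask[i] && (!mask[i - 1] || !mask[i + 1] ||
                           !mask[i - width] || !mask[i + width]);
    }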
Hope it helped
Try this download: it is Kinect finger detection. Get the "Recommended" download on the page; I have found the .dlls in it work well. Good luck!

Improve face detection performance with OpenCV/EmguCV

I am currently using EmguCV (an OpenCV C# wrapper) successfully to detect faces in real time from a webcam. I get around 7 FPS.
Now I'm looking to improve performance (and save CPU cycles), and I'm considering these options:
Detect the face, pick up features of the face, and try to find those features in the next frames (using the SURF algorithm), so this becomes "face detection + tracking". If they are not found, run face detection again.
Detect the face; in the next frame, try to detect the face in an ROI where the face previously was (i.e., look for the face in a smaller part of the image). If the face is not found there, look for it in the whole image again (see the sketch after this list).
Side idea: if no face is detected for 2-3 frames and there is no movement in the image, don't try to detect any more faces until movement is detected.
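
Here is a rough EmguCV sketch of idea 2 (padded ROI first, full-frame scan as fallback). The cascade file name, padding factor, and detector parameters are assumptions:

    using System.Drawing;
    using Emgu.CV;

    // Search a padded ROI around the last known face first; fall back to a
    // full-frame scan when that fails. Expects a grayscale frame.
    class RoiFaceTracker
    {
        private readonly CascadeClassifier _faces =
            new CascadeClassifier("haarcascade_frontalface_default.xml");
        private Rectangle _lastFace = Rectangle.Empty;

        public Rectangle? Detect(Mat gray)
        {
            if (_lastFace != Rectangle.Empty)
            {
                Rectangle roi = _lastFace;
                roi.Inflate(_lastFace.Width / 2, _lastFace.Height / 2); // padding
                roi.Intersect(new Rectangle(Point.Empty, gray.Size));
                using (var sub = new Mat(gray, roi))
                {
                    var hits = _faces.DetectMultiScale(sub, 1.1, 4);
                    if (hits.Length > 0)
                    {
                        Rectangle f = hits[0];
                        f.Offset(roi.Location);       // back to frame coordinates
                        return _lastFace = f;
                    }
                }
            }
            var all = _faces.DetectMultiScale(gray, 1.1, 4); // full-frame fallback
            if (all.Length > 0) return _lastFace = all[0];
            _lastFace = Rectangle.Empty;
            return null;
        }
    }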
Do you have any suggestions for me?
Thanks.
All the solutions you introduced seem smart and reasonable. However, if you use Haar cascades for face detection, you might try to create a cascade with fewer stages. Although 20 stages are recommended for face detection, 10-15 might be enough, and that would noticeably improve performance. Information on creating your own cascades can be found in Tutorial: OpenCV haartraining (Rapid Object Detection With A Cascade of Boosted Classifiers Based on Haar-like Features).
Again, using SURF is a good idea. You can also try P-N learning: Bootstrapping binary classifiers by structural constraints. There are interesting videos on YouTube presenting this method; try to find them.
As for the SURF algorithm, you could try it, but I am not sure it provides relevant features on a face: maybe around the eyes, or if you are close and have skin irregularities, or perhaps in the hair if the resolution is high enough. Moreover, SURF is not particularly fast, and I would avoid extra computation if you want to save CPU time.
The ROI is a good idea; you could choose it using the CamShift algorithm. It won't save a lot of CPU, but you could try it, as CamShift is a very lightweight algorithm. Again, I am not sure it will be really relevant, but you have the right idea in your second point: minimize the zone to search in...
The side idea seems quite good to me. You could try to detect motion (global motion, for instance), and if there isn't much, don't try to detect again what you have already detected. You could do that with motion templates, since you know the silhouette from mean shift or face detection.
A very simple, lightweight, but not very robust template match between frame n-1 and frame n could also give you a coefficient measuring a sort of similarity between the two frames, and you could say that below a certain threshold you activate face detection... why not? It should take five minutes to implement if the C# wrapper has the matchTemplate() equivalent function...
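
EmguCV does expose an equivalent, CvInvoke.MatchTemplate. A minimal sketch of such a gate, where the 0.98 threshold is just an assumed starting point to tune:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;

    // Match the previous grayscale frame against the current one; a normalized
    // correlation near 1.0 means "almost identical", so only rerun the (costly)
    // face detector when the score drops below the threshold.
    static bool FramesChanged(Mat prevGray, Mat currGray)
    {
        using (var result = new Mat())        // 1x1 result for same-size inputs
        {
            CvInvoke.MatchTemplate(currGray, prevGray, result,
                                   TemplateMatchingType.CcoeffNormed);
            double min = 0, max = 0;
            Point minLoc = new Point(), maxLoc = new Point();
            CvInvoke.MinMaxLoc(result, ref min, ref max, ref minLoc, ref maxLoc);
            return max < 0.98;                // assumed threshold; tune it
        }
    }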
I'll come back here if I have better (deeper) ideas, but for now I've just come back from work and it's hard to think more...
Julien,
This is not a perfect answer, just a suggestion.
In my digital image processing classes, in my last semester of a B.Tech in CS, I learned about bit-plane slicing, and how an image reconstructed from just its MSB plane retains almost 70% of the useful image information. So you would be working with nearly the original image, but at just one-eighth of the original size.
Although I haven't implemented it in my own project, I have been wondering about using it to speed up face detection, because later on, eye detection and pupil and eye-corner detection also take a lot of computation time and slow the whole program down. A sketch of the slicing step is below.
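
For illustration, a minimal C# sketch of extracting the MSB plane from an 8-bit grayscale buffer (names are illustrative):

    // Keep only the most significant bit of each 8-bit grayscale pixel.
    // The result is a binary image that preserves the coarse structure.
    static byte[] MsbPlane(byte[] gray)
    {
        var plane = new byte[gray.Length];
        for (int i = 0; i < gray.Length; i++)
            plane[i] = (byte)((gray[i] & 0x80) != 0 ? 255 : 0);
        return plane;
    }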

Detecting people crossing a line with OpenCV

I want to count the number of people crossing a line from either side. I have a camera placed on the ceiling, shooting the floor where the line is (so the camera sees just the tops of people's heads; it is therefore more object detection than people detection).
Is there any sample solution for this problem, or similar problems, that I can learn from?
Edit 1: more than one person may be crossing the line at any moment.
If nothing but humans will cross the line, then you do not need to detect people; you only have to detect motion.
There are several approaches to motion detection.
Probably the simplest one fits your goals: calculate the difference between successive frames of the video stream to determine a "motion mask", and use it to detect the line-crossing event. A sketch of this step follows below.
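
A minimal EmguCV sketch of that frame-differencing step, with an assumed threshold value; checking the mask along the line's pixels then gives the crossing event:

    using Emgu.CV;
    using Emgu.CV.CvEnum;

    // Threshold the absolute difference of successive grayscale frames to get
    // a binary motion mask; pixels that changed come out as 255.
    static Mat MotionMask(Mat prevGray, Mat currGray)
    {
        var diff = new Mat();
        CvInvoke.AbsDiff(currGray, prevGray, diff);
        CvInvoke.Threshold(diff, diff, 25, 255, ThresholdType.Binary); // assumed threshold
        return diff;
    }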
As an improvement of this "algorithm" you may consider the "running average" method.
To determine the direction of motion you can use "motion templates".
To increase the accuracy of your detector, you may try a background subtraction technique (which in turn is not a simple solution), for example when there is a moving background that should be filtered out (e.g., using statistical learning).
All of the algorithms mentioned are included in the OpenCV library.
UPD:
How to compute a motion mask
Useful functions for determining motion direction: cvCalcMotionGradient, cvSegmentMotion, cvUpdateMotionHistory (search the docs). The OpenCV library contains example code for motion analysis; see motempl.c.
Advanced background subtraction, from the "Learning OpenCV" book
I'm not an expert in video-based CV, but if you can reduce the problem to a finite set of images (for instance: entering the frame, standing on the line, exiting the frame), then you can use one of many shape recognition algorithms. I know Shape Context is good, but I doubt it is subtle enough for this application (it won't tell the difference between a head and most other round objects).
Basically, try to extract key images from the video, and then test them with shape recognition algorithms.
P.S. Finding the key images might be possible with good motion detection methods.
