Kinect front light causes grip detection errors - C#

I'm writing an application that uses the Kinect for Windows (not the new full-HD Kinect) with SDK 1.8. I'm using KinectRegion and KinectScrollViewer. There is a window in front of the Kinect camera, so the subject being tracked has sunlight coming from behind. When the light is intense, the Kinect loses the ability to detect the grip and release interactions correctly (sometimes grip or release events fire randomly). My application will run in environments where these lighting conditions are possible. My question is: are there any parameters in the skeleton stream, or anywhere else, that I can set to change the sensitivity to this kind of light?

The Kinect sensor measures depth using infrared light, and the IR component of sunlight disturbs it (you can see this in the depth map when you have direct sunlight, see this link: http://dasl.mem.drexel.edu/wiki/index.php/KINECT_in_direct_sunlight), so unfortunately there isn't much you can do about it except trying to keep the sunlight out of the scene.
Here is another video of how the depth sensing works: http://www.youtube.com/watch?v=uq9SEJxZiUg
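There is no exposed setting that makes the sensor itself more robust to IR interference. The closest knobs SDK 1.8 offers are the skeleton smoothing parameters, which filter joint jitter; a minimal sketch is below. Note this cannot restore depth data lost to saturation, and the parameter values are illustrative assumptions, not tuned for your setup.

using Microsoft.Kinect;

KinectSensor sensor = KinectSensor.KinectSensors[0];

// Heavier smoothing damps the joint jitter that can trigger spurious
// grip/release events, at the cost of responsiveness.
var smoothing = new TransformSmoothParameters
{
    Smoothing = 0.7f,          // 0..1, higher = smoother output
    Correction = 0.3f,         // how quickly to correct toward raw data
    Prediction = 0.4f,         // frames of prediction into the future
    JitterRadius = 0.1f,       // meters; jumps below this are clamped
    MaxDeviationRadius = 0.1f  // meters; cap on filtered-vs-raw drift
};
sensor.SkeletonStream.Enable(smoothing);
sensor.Start();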

Related

How to control exposure or brightness of webcam in UWP using MediaCapture.VideoDeviceController for QR-Code scanning

I'm working on a UWP app that needs to scan QR codes from a laptop webcam. I'm using the Windows.Media.Capture.MediaCapture class for this. Everything works well, except when the QR code is on a smartphone whose screen brightness is set too high for the limited dynamic range of built-in webcams. The webcam's auto-exposure is active, but the screen can still be too bright compared to the environment.
I'm looking for a way to control or override the brightness or exposure either manually or by using some kind of exposure compensation mode.
The only brightness/exposure properties that are enabled and working on my regular built-in webcam are Brightness and Contrast, and those change the image accordingly, but they look like post-processing effects. They don't change the exposure of the camera itself, so they don't fix the issue.
mediaCapture.VideoDeviceController.ExposureCompensationControl.Supported;
mediaCapture.VideoDeviceController.ExposureControl.Supported;
mediaCapture.VideoDeviceController.ExposurePriorityVideoControl.Supported;
mediaCapture.VideoDeviceController.Exposure.Capabilities.Supported;
all return false.
mediaCapture.VideoDeviceController.Brightness.TrySetValue(10);
changes the image, but the highlights are still washed out and have no detail for the scanner to pick up.
With respect to programmatically controlling the camera exposure via a Windows driver, you are considering the correct interface. On a Microsoft Surface Pro 4, I have successfully modified the exposure using this interface:
mediaCapture.VideoDeviceController.ExposureControl
Microsoft has also provided some nice examples and documentation for how to get this to work. Keep in mind that the samples (and the Microsoft Camera app) will hide the controls if the exposure feature is not supported by your hardware.
https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/capture-device-controls-for-photo-and-video-capture
https://github.com/microsoft/Windows-universal-samples/tree/master/Samples/CameraManualControls
The lack of access to imaging controls (such as exposure) really has nothing to do with camera quality. It has more to do with the completeness of the camera solution. Camera sensors have a control interface (for example, I2C) that is separate from the data interface that delivers the image. Many third-party camera modules simply do not implement the hardware/software required to expose these controls.
The Supported property of the VideoDeviceController's control objects (ExposureCompensationControl.Supported, for example) doesn't always give accurate information unless the camera is active, so be sure to start the preview or frame capture before asking whether a camera control is supported (see the sketch after the quote below).
From the VideoDeviceController documentation:
Some drivers may require that the camera device preview be in a running state before they can determine which controls are supported by the VideoDeviceController. If you check whether a certain control is supported before the preview stream is running, the control may be described as unsupported even though it is supported by the video device.
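A minimal sketch of that ordering, assuming a UWP page with a CaptureElement named PreviewControl (the name is hypothetical) and an async context:

using Windows.Media.Capture;

var mediaCapture = new MediaCapture();
await mediaCapture.InitializeAsync();

// Start the preview first; some drivers only report accurate
// Supported values while the camera is streaming.
PreviewControl.Source = mediaCapture;
await mediaCapture.StartPreviewAsync();

var exposure = mediaCapture.VideoDeviceController.ExposureControl;
if (exposure.Supported)
{
    // Switch to manual exposure and shorten the exposure time so a
    // bright phone screen stops clipping; exposure.Min is the shortest
    // duration the driver allows.
    await exposure.SetAutoAsync(false);
    await exposure.SetValueAsync(exposure.Min);
}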

Hand Detection using C#

I am working on a project in Unity-Android that requires gesture recognition that works accurately and efficiently (as it will be running on a mobile device) for further processing.
Currently, I am using HSV to detect the hand region, which is not very accurate even when a touch input is taken from the user.
Please refer me to a solution that detects the hand region efficiently. It should be robust while the user opens/closes any number of fingers or rotates the hand.
Note: as I'm working in Unity, I need help in C# only.
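For illustration, this is the kind of HSV skin thresholding the question describes, as a minimal Emgu CV sketch. The threshold values are assumptions and are highly lighting-dependent, which is exactly why the approach tends to be fragile:

using Emgu.CV;
using Emgu.CV.Structure;

// Convert the frame to HSV and keep pixels inside a skin-tone range.
// The bounds below are illustrative only; skin hue/saturation vary
// with lighting, camera, and subject.
Image<Bgr, byte> frame = new Image<Bgr, byte>("hand.png");
Image<Hsv, byte> hsv = frame.Convert<Hsv, byte>();
Image<Gray, byte> mask = hsv.InRange(
    new Hsv(0, 30, 60),     // lower bound: hue, saturation, value
    new Hsv(25, 180, 255)); // upper bound
// 'mask' is a binary image; contours on it give candidate hand regions.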

Camera calibration and predicting the eye focus point on screen with OpenCV / EmguCV

I have a project idea that checks web usability using eye tracking. For that I need to predict the focus point on the screen (i.e. a pixel position) within a specific time gap (0.5 seconds).
Here is some additional information:
I intend to use OpenCV or EmguCV, but it is causing me a bit of trouble because of my inexperience with OpenCV.
I am planning to "flatten" the eye so it appears to move on a plane. The obvious choice is to calibrate the camera to try to remove the radial distortion.
During the calibration process the user looks at the corners of a grid on a screen. The moments of the pupil are stored in a Mat for each position during the calibration, so I have an image with dots corresponding to a number of eye positions when looking at the corners of a grid on the screen.
Is there any article or example I can refer to, to get a good idea about this scenario and OpenCV eye prediction?
Thanks!
Different methods of camera calibration are possible (similar to your corner-dots method).
There is thesis work on eye gaze using C++ and OpenCV that should definitely help you; you can find some OpenCV-based C++ scripts there as well.
FYI:
Some works claim eye-gaze estimation without calibration:
"Calibration-Free Gaze Tracking: An Experimental Analysis" by Maximilian Möllers
"Free head motion eye gaze tracking without calibration"
[I am restricted to posting fewer than 2 reference links]
To get precise eye locations, you first need to calibrate your camera, using the chessboard approach or some other tool, and then undistort the image if it is distorted.
OpenCV already ships with an eye detector (a Haar classifier; see the haarcascade_eye.xml file), so you can locate and track the eye easily.
Beyond that, it is only math that matches the detected eye to the location it is looking at.
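A minimal Emgu CV sketch of the Haar-cascade step mentioned above (the image path is a placeholder; the cascade file ships in OpenCV's data directory):

using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

// Load the Haar cascade that ships with OpenCV and find eye regions.
var eyeCascade = new CascadeClassifier("haarcascade_eye.xml");
using (var frame = new Image<Bgr, byte>("face.png"))
{
    Image<Gray, byte> gray = frame.Convert<Gray, byte>();
    // scaleFactor 1.1 and minNeighbors 4 are common starting values.
    Rectangle[] eyes = eyeCascade.DetectMultiScale(
        gray, 1.1, 4, new Size(20, 20));
    foreach (Rectangle eye in eyes)
        frame.Draw(eye, new Bgr(Color.Red), 2);
}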

Retrieve gaze coordinate on screen

I am implementing an eye tracker using EmguCV (an OpenCV C# wrapper). So far I have been able to detect the iris center and eye corner accurately.
As the next step I want to get the screen coordinate the user is focusing on (also known as the gaze point). As a beginner in image processing, I am completely unaware of gaze mapping and gaze estimation.
I would be thankful if you could provide any code snippets or algorithms that perform gaze mapping to retrieve the gaze coordinate on screen.
Thanks in advance
If you want to research the field in more depth, there is an interesting piece of research called "EyeTab: Model-based gaze estimation on unmodified tablet computers". It may have some of the information you want, or at least help you understand the field better.
http://www.cl.cam.ac.uk/research/rainbow/projects/eyetab/
(also GitHub)
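As a starting point, a common baseline for gaze mapping is to fit a regression from the pupil position (often the pupil-to-corner vector you already have) to screen coordinates, using a few on-screen calibration targets. Below is a hypothetical, self-contained sketch using an affine model fitted by least squares; real systems usually use second-order polynomials or a model-based approach like EyeTab's:

using System;

class GazeMapper
{
    double[] cx, cy; // fitted coefficients for screen X and screen Y

    // pupil[i] = {px, py}; screen[i] = {sx, sy} for each calibration target.
    public void Calibrate(double[][] pupil, double[][] screen)
    {
        cx = Fit(pupil, Array.ConvertAll(screen, s => s[0]));
        cy = Fit(pupil, Array.ConvertAll(screen, s => s[1]));
    }

    // screenX ≈ c[0] + c[1]*px + c[2]*py (affine model).
    public (double X, double Y) Map(double px, double py) =>
        (cx[0] + cx[1] * px + cx[2] * py,
         cy[0] + cy[1] * px + cy[2] * py);

    // Least squares via the normal equations: (A^T A) c = A^T b.
    static double[] Fit(double[][] p, double[] t)
    {
        var ata = new double[3, 3];
        var atb = new double[3];
        for (int i = 0; i < p.Length; i++)
        {
            double[] row = { 1.0, p[i][0], p[i][1] };
            for (int r = 0; r < 3; r++)
            {
                atb[r] += row[r] * t[i];
                for (int c = 0; c < 3; c++) ata[r, c] += row[r] * row[c];
            }
        }
        return Solve3x3(ata, atb);
    }

    // Cramer's rule; adequate for a well-conditioned 3x3 system.
    static double[] Solve3x3(double[,] a, double[] b)
    {
        double Det(double[,] m) =>
            m[0, 0] * (m[1, 1] * m[2, 2] - m[1, 2] * m[2, 1])
          - m[0, 1] * (m[1, 0] * m[2, 2] - m[1, 2] * m[2, 0])
          + m[0, 2] * (m[1, 0] * m[2, 1] - m[1, 1] * m[2, 0]);

        double d = Det(a);
        var c = new double[3];
        for (int k = 0; k < 3; k++)
        {
            var m = (double[,])a.Clone();
            for (int r = 0; r < 3; r++) m[r, k] = b[r];
            c[k] = Det(m) / d;
        }
        return c;
    }
}

The affine fit needs at least three non-collinear calibration targets; more points and a polynomial model improve accuracy.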

C# Capturing a Direct3D Screen

I have been fooling around with screen capture for a while now, and I managed to capture the entire screen, certain areas of the screen, etc.
But when I go into a game and try to capture the screen, it completely ignores the game and instead captures the desktop (or whatever is behind the game window).
Another interesting fact is that the same thing happens with the PrtScn button.
Any ideas on how to capture a game's screen?
The screen-capture technique you are using works well for capturing things that aren't hardware accelerated. I suspect you'd have the same problem trying to capture a movie frame in Windows Media Player.
The solution is to do a screen capture from the hardware itself using DirectX. This article explains how to do that, with some code and a managed wrapper around DirectX called SlimDX.
EDIT
If SlimDX doesn't work for you, then you'd just have to find another managed wrapper around DirectX. I don't think you'll be able to do the screen capture without working at the hardware level, and DirectX is the means of doing that on the Windows platform.
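A hedged sketch of the front-buffer approach the linked article describes, using SlimDX's Direct3D9 API. The resolution is hard-coded as an assumption; a real app should query the current display mode and pass a proper window handle when creating the device:

using System;
using SlimDX.Direct3D9;

var direct3d = new Direct3D();
var parameters = new PresentParameters
{
    Windowed = true,
    SwapEffect = SwapEffect.Discard
};
// IntPtr.Zero as the focus window is a simplification; pass a real
// window handle in production code.
var device = new Device(direct3d, 0, DeviceType.Hardware, IntPtr.Zero,
    CreateFlags.SoftwareVertexProcessing, parameters);

// The front buffer is what the GPU actually scans out, so this picks up
// hardware-accelerated content that GDI-based capture misses.
using (var surface = Surface.CreateOffscreenPlain(device, 1920, 1080,
    Format.A8R8G8B8, Pool.SystemMemory))
{
    device.GetFrontBufferData(0, surface);
    Surface.ToFile(surface, "capture.png", ImageFileFormat.Png);
}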
