Kinect "sleep" command - c#

I'm trying to write a C# hand and fingertip detection program. So far I have been able to get the hand points and store them in a List, but I'm a little stuck on how to present that data in order to visualise the results.
My current solution is to draw a black point on a Canvas (I'm trying to use an Ellipse shape for this) for each point I have, but I think this is so time-consuming that I can't see the results.
Is there a way to make the Kinect ignore the next, for example, 30 frames? In other words, can I make the Kinect raise the OnFrameReady event only once every 30 frames?
If anyone has any other solution for presenting the results, feel free to share ;)
Thanks in advance.

Since OnFrameReady is an event, look into Reactive Extensions (Rx).
Rx has a Throttle extension method that you can use to take only one frame per second. For an example, check out this SO question:
How to throttle event stream using RX?
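Below is a minimal sketch of the idea, assuming the Kinect SDK 1.x AllFramesReady event and a started KinectSensor called sensor; DrawHandPoints is a hypothetical method standing in for your own drawing code. Note that for a steady 30 fps stream, Rx's Sample operator (take the latest item once per interval) tends to be a better fit than Throttle, which only emits after the stream goes quiet.
using System;
using System.Reactive.Linq;
using Microsoft.Kinect;

// Wrap the frame-ready event in an IObservable ("sensor" is your started KinectSensor)...
var frames = Observable.FromEventPattern<AllFramesReadyEventArgs>(
    h => sensor.AllFramesReady += h,
    h => sensor.AllFramesReady -= h);

// ...and take at most one frame per second.
IDisposable subscription = frames
    .Sample(TimeSpan.FromSeconds(1))
    .Subscribe(e => DrawHandPoints(e.EventArgs)); // marshal to the UI thread before touching the Canvas
Dispose the subscription when you stop the sensor.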

EndDraw() takes 80 % of working time in Direct2D

I am facing a critical performance problem in my Direct2D application. I use Direct2D to draw my graph with PathGeometry, both for better performance and to achieve clean rendering on Windows 8.1.
While creating the DeviceResources, I create the PathGeometry using the factory interface. Then I set the graph points to draw my graph on the output surface. Finally, the rendered ImageSource is used as the source for my Image element in the XAML.
I followed the sample below to implement my scenario.
http://code.msdn.microsoft.com/windowsapps/XAML-SurfaceImageSource-58f7e4d5
The above sample helped me a lot in getting the ImageSource output from Direct2D and using it across the XAML/C# app.
Now to my problem. I use more than 24 graphs on a single page of my Windows Store application. Each graph allows the user to pan left and right and also to scale to a particular zoom level.
Hence, whenever the user manipulates the graph, I just set the translation and scaling matrix on the TransformedPathGeometry instead of creating a new one every time.
ID2D1TransformedGeometry *m_pTransformedGeometry;
pFactory->CreateTransformedGeometry(graphgeometry, combinedMatrix, &m_pTransformedGeometry);
Finally, I draw the TransformedGeometry using the DrawGeometry method.
I profiled my application with the performance analysis tool in Visual Studio 2013. I could see that, at a particular peak, more than 80% of the running time is spent in the m_d2deviceContext->EndDraw() call. I attached the screenshot below to give a better idea of this performance output.
Is there any way to improve this performance considerably?
Could anyone please help me with this?
Regards,
David C
There is a difference between slow performance and time spent.
If your draw method does more work than the other parts, that can mean this method is slow, but it can also mean that the other parts simply don't need much CPU.
88.2% only tells you that you spend more time drawing that stuff than doing other stuff.
Use a timer to determine whether your draw is actually slow.
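For example, on the C# side you could wrap whatever triggers the Direct2D rendering in a Stopwatch; RenderGraphs() here is a hypothetical stand-in for your own call that ends up in BeginDraw()/EndDraw().
using System.Diagnostics;

var stopwatch = Stopwatch.StartNew();
RenderGraphs(); // hypothetical: whatever ends up calling BeginDraw()/EndDraw()
stopwatch.Stop();
Debug.WriteLine("Draw took " + stopwatch.ElapsedMilliseconds + " ms");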
"i just set the Translation and Scaling matrix to the
TransformedPathGeometry instead of creating the new one for each and
every time."
I hope you are using a world transform there? Otherwise you are overwriting the pointer without releasing it first, which produces a memory leak. You cannot change the matrix of a geometry directly; you have to recreate it each time, or, if you can, apply the transform to the entire world.

Localization of a robot using Kinect and EMGU(OpenCV wrapper)

I'm working on a small WPF desktop app to track a robot. I have a Kinect for Windows on my desk, and I was able to implement the basic features and run the depth camera stream and the RGB camera stream.
What I need is to track a robot on the floor, but I have no idea where to start. I found out that I should use EMGU (an OpenCV wrapper).
What I want to do is track the robot and find its location using the depth camera. Basically, it's localization of the robot using stereo triangulation. Then, using TCP over Wi-Fi, I will send the robot commands to move it from one place to another, using both the RGB and depth cameras. The RGB camera will also be used to map the objects in the area so that the robot can take the best path and avoid obstacles.
The problem is that I have never worked with computer vision before; this is actually my first such project. I'm not bound to a deadline, and I'm more than willing to learn everything related in order to finish this project.
I'm looking for details, explanation, hints, links or tutorials to achieve my need.
Thanks.
Robot localization is a very tricky problem, and I have been struggling with it for months now. I can tell you what I have achieved, but you have a number of options:
Optical-flow-based odometry (also known as visual odometry):
Extract keypoints or features from one image (I used Shi-Tomasi corners, i.e. cvGoodFeaturesToTrack)
Do the same for a consecutive image
Match these features (I used Lucas-Kanade)
Extract depth information from the Kinect
Calculate the transformation between the two 3D point clouds.
What the above algorithm does is estimate the camera motion between two frames, which in turn tells you the position of the robot. A rough sketch of the feature-tracking steps is shown below.
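This sketch uses the Emgu CV 2.x-era API from memory (method names and signatures have changed in later Emgu versions, so check your version's documentation); prevGray and currGray are assumed to be consecutive greyscale frames taken from the Kinect's RGB stream.
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

public static PointF[] TrackFeatures(Image<Gray, byte> prevGray, Image<Gray, byte> currGray,
                                     out PointF[] prevFeatures, out byte[] status)
{
    // Step 1: Shi-Tomasi corners in the previous frame.
    prevFeatures = prevGray.GoodFeaturesToTrack(300, 0.01, 10, 3)[0];

    // Steps 2-3: pyramidal Lucas-Kanade finds where those corners moved to.
    PointF[] currFeatures;
    float[] trackError;
    OpticalFlow.PyrLK(prevGray, currGray, prevFeatures,
                      new Size(21, 21), 3, new MCvTermCriteria(20, 0.03),
                      out currFeatures, out status, out trackError);

    // Keep only the points with status == 1, look up the Kinect depth at each,
    // and build two 3D point sets to estimate the camera motion between frames.
    return currFeatures;
}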
Monte Carlo Localization: This is rather simpler, but you should also use wheel odometry with it.
Check this paper out for a C#-based approach.
The method above uses probabilistic models to determine the robot's location.
The sad part is that even though libraries exist in C++ to do what you need very easily, wrapping them for C# is a herculean task. If you can write a wrapper, however, then 90% of your work is done; the key libraries to use are PCL and MRPT.
The last option (which is by far the easiest, but the most inaccurate) is to use KinectFusion, built into the Kinect SDK 1.7. But my experiences with it for robot localization have been very bad.
You must read SLAM for Dummies; it will make things about Monte Carlo Localization very clear.
The hard reality is that this is very tricky, and you will most probably end up doing it yourself. I hope you dive into this vast topic; you will learn awesome stuff.
For further information, or for wrappers that I have written, just comment below... :-)
Best
Not sure if it would help you or not... but I put together a Python module that might help.
http://letsmakerobots.com/node/38883#comments

Kinect gesture analysis

I'm building a Kinect application using the official Kinect SDK.
The results I want:
1) detect that the body has been waving for 5 seconds, and do something if it has
2) detect that the user has been leaning on one leg for 5 seconds, and do something if they have.
Does anyone know how to do this? I'm working in a WPF application.
I would like to see some examples. I'm rather new to the Kinect.
Thanks in advance for all your help!
The Kinect provides you with the skeletons it's tracking; you have to do the rest. Basically you need to create a definition for each gesture you want, and run that against the skeletons every time the SkeletonFrameReady event is fired. This isn't easy.
Defining Gestures
Defining the gestures can be surprisingly difficult. The simplest (easiest) gestures are ones that happen at a single point in time, and therefore don't rely on past locations of the limbs. For example, if you want to detect when the user has their hand raised above their head, this can be checked on every individual frame. More complicated gestures need to take a period of time into account. For your waving gesture, you won't be able to tell from a single frame whether a person is waving or just holding their hand up in front of them.
So now you need to be able to store relevant information from the past, but what information is relevant? Should you keep a store of the last 30 frames and run an algorithm against that? 30 frames only gets you a second's worth of information.. perhaps 60 frames? Or for your 5 seconds, 300 frames? Humans don't move that fast, so maybe you could use every fifth frame, which would bring your 5 seconds back down to 60 frames. A better idea would be to pick and choose the relevant information out of the frames. For a waving gesture the hand's current velocity, how long it's been moving, how far it's moved, etc. could all be useful information.
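As a minimal sketch of storing that kind of information, here is a hypothetical HandHistory class (HandSample and HandHistory are not part of the Kinect SDK) that keeps the last five seconds of hand positions and derives a rough horizontal speed from them:
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Kinect;

public class HandSample
{
    public SkeletonPoint Position { get; set; }
    public DateTime Time { get; set; }
}

public class HandHistory
{
    private readonly Queue<HandSample> _samples = new Queue<HandSample>();
    private readonly TimeSpan _window = TimeSpan.FromSeconds(5);

    public void Add(SkeletonPoint position)
    {
        _samples.Enqueue(new HandSample { Position = position, Time = DateTime.UtcNow });

        // Drop anything older than the window we care about.
        while (_samples.Count > 0 && DateTime.UtcNow - _samples.Peek().Time > _window)
            _samples.Dequeue();
    }

    // Total horizontal distance travelled divided by elapsed time (metres per second).
    public double AverageHorizontalSpeed()
    {
        if (_samples.Count < 2) return 0;
        var list = _samples.ToList();

        double distance = 0;
        for (int i = 1; i < list.Count; i++)
            distance += Math.Abs(list[i].Position.X - list[i - 1].Position.X);

        double seconds = (list[list.Count - 1].Time - list[0].Time).TotalSeconds;
        return seconds > 0 ? distance / seconds : 0;
    }
}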
After you've figured out how to get and store all the information pertaining to your gesture, how do you turn those numbers into a definition? Waving could require a certain minimum speed, or a direction (left/right instead of up/down), or a duration. However, this duration isn't the 5 second duration you're interested in. This duration is the absolute minimum required to assume that the user is waving. As mentioned above, you can't determine a wave from one frame. You shouldn't determine a wave from 2, or 3, or 5, because that's just not enough time. If my hand twitches for a fraction of a second, would you consider that a wave? There's probably a sweet spot where most people would agree that a left to right motion constitutes a wave, but I certainly don't know it well enough to define it in an algorithm.
There's another problem with requiring a user to do a certain gesture for a period of time. Chances are, not every frame in that five seconds will appear to be a wave, regardless of how well you write the definition. Whereas you can easily determine if someone held their hand over their head for five seconds (because it can be determined on a single frame basis), it's much harder to do that for complicated gestures. And while waving isn't that complicated, it still shows this problem. As your hand changes direction at either side of a wave, it stops moving for a fraction of a second. Are you still waving then? If you answered yes, wave more slowly so you pause a little more at either side. Would that pause still be considered a wave? Chances are, at some point in that five second gesture, the definition will fail to detect a wave. So now you need to take into account a leniency for the gesture duration.. if the waving gesture occurred for 95% of the last five seconds, is that good enough? 90%? 80%?
The point I'm trying to make here is there's no easy way to do gesture recognition. You have to think through the gesture and determine some kind of definition that will turn a bunch of joint positions (the skeleton data) into a gesture. You'll need to keep track of relevant data from past frames, but realize that the gesture definition likely won't be perfect.
Consider the Users
So now that I've said why the five second wave would be difficult to detect, allow me to at least give my thoughts on how to do it: don't. You shouldn't force users to repeat a motion based gesture for a set period of time (the five second wave). It is surprisingly tiring and just not what people expect/want from computers. Point and click is instantaneous; as soon as we click, we expect a response. No one wants to have to hold a click down for five seconds before they can open Minesweeper. Repeating a gesture over a period of time is okay if it's continually executing some action, like using a gesture to cycle through a list - the user will understand that they must continue doing the gesture to move farther through the list. This even makes the gesture easier to detect, because instead of needing information for the last 5 seconds, you just need enough information to know if the user is doing the gesture right now.
If you want the user to hold a gesture for a set amount of time, make it a stationary gesture (holding your hand at some position for x seconds is a lot easier than waving). It's also a very good idea to give some visual feedback, to say that the timer has started. If a user screws up the gesture (wrong hand, wrong place, etc) and ends up standing there for 5 or 10 seconds waiting for something to happen, they won't be happy, but that's not really part of this question.
Starting with Kinect Gestures
Start small.. really small. First, make sure you know your way around the SkeletonData class. There are 20 joints tracked on each skeleton, and they each have a TrackingState. This tracking state will show whether the Kinect can actually see the joint (Tracked), if it is figuring out the joint's position based on the rest of the skeleton (Inferred), or if it has entirely abandoned trying to find the joint (NotTracked). These states are important. You don't want to think the user is standing on one leg simply because the Kinect doesn't see the other leg and is reporting a bogus position for it. Each joint has a position, which is how you know where the user is standing.. piece by piece. Become familiar with the coordinate system.
After you know the basics of how the skeleton data is reported, try for some simple gestures. Print a message to the screen when the user raises a hand above their head. This only requires comparing each hand to the Head joint and seeing if either hand is higher than the head in the coordinate plane. After you get that working, move up to something more complicated. I'd suggest trying a swiping motion (hand in front of body, moves either right to left or left to right some minimum distance). This requires information from past frames, so you'll have to think through what information to store. If you can get that working, you could try stringing a series of swiping gestures together in a small amount of time and interpreting that as a wave.
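A minimal sketch of that first hand-above-head check, in the Kinect SDK 1.x style (assuming a started KinectSensor whose SkeletonFrameReady event is wired to this handler):
using System;
using Microsoft.Kinect;

private Skeleton[] _skeletons = new Skeleton[0];

private void SensorSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    using (SkeletonFrame frame = e.OpenSkeletonFrame())
    {
        if (frame == null) return;

        if (_skeletons.Length != frame.SkeletonArrayLength)
            _skeletons = new Skeleton[frame.SkeletonArrayLength];
        frame.CopySkeletonDataTo(_skeletons);
    }

    foreach (Skeleton skeleton in _skeletons)
    {
        if (skeleton.TrackingState != SkeletonTrackingState.Tracked) continue;

        Joint head = skeleton.Joints[JointType.Head];
        Joint rightHand = skeleton.Joints[JointType.HandRight];

        // Ignore joints the sensor is only inferring or not tracking at all.
        if (head.TrackingState != JointTrackingState.Tracked ||
            rightHand.TrackingState != JointTrackingState.Tracked) continue;

        if (rightHand.Position.Y > head.Position.Y)
            Console.WriteLine("Right hand is above the head.");
    }
}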
tl;dr: Gestures are hard. Start small, build your way up. Don't make users do repetitive motions for a single action, it's tiring and annoying. Include visual feedback for duration based gestures. Read the rest of this post.
The Kinect SDK helps you get the coordinates of the different joints. A gesture is nothing but a change in the position of a set of joints over a period of time.
To recognize gestures, you have to store the coordinates for a period of time and iterate through them to see if they obey the rules for a particular gesture (such as: the right hand always moves upwards). A small sketch of such a rule check is shown below.
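For illustration only (the method name and its tolerance parameter are hypothetical), here is a rule check over buffered Y coordinates of the right hand, oldest first, allowing a little jitter:
using System.Collections.Generic;

private static bool AlwaysMovesUpwards(IList<float> handYHistory, float tolerance = 0.005f)
{
    for (int i = 1; i < handYHistory.Count; i++)
    {
        if (handYHistory[i] < handYHistory[i - 1] - tolerance)
            return false; // the hand dropped, so the rule is broken
    }
    return handYHistory.Count >= 2;
}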
For more details, check out my blog post on the topic:
http://tinyurl.com/89o7sf5

Graphing variables in real time for debugging in C#

I'm working on a project in C# using XNA. It would be useful to have a way to graph the value of certain variables in real time as the game is running. (Just for debugging, I don't want the graphs in the final project.)
This seems like something that has probably been done before, but I can't find anything other than this:
http://devmag.org.za/2011/02/09/using-graphs-to-debug-physics-ai-and-animation-effectively/
I'm willing to write my own system or use someone else's. If there is an existing released project, I'd like to find it so I don't have to reinvent the wheel. If not, what's the best way to handle this?
Thanks!
You could probably write a pretty easy system using XNA.
You should have something very basic for plotting lines:
public void DrawLine(Point first, Point second);
Run this every frame to collect the variable's value, and keep track of the previous value.
Draw a line from the previous value to the current value.
If the sampling frequency is high enough, you can get a pretty decent-looking graph this way.
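A minimal sketch of that DrawLine in XNA, using the common trick of stretching and rotating a 1x1 white texture with SpriteBatch (the class name LineRenderer is just an example):
using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class LineRenderer
{
    private readonly Texture2D _pixel;

    public LineRenderer(GraphicsDevice device)
    {
        // A single white pixel that gets stretched into line segments.
        _pixel = new Texture2D(device, 1, 1);
        _pixel.SetData(new[] { Color.White });
    }

    // Call between spriteBatch.Begin() and spriteBatch.End().
    public void DrawLine(SpriteBatch spriteBatch, Vector2 first, Vector2 second, Color color)
    {
        Vector2 delta = second - first;
        float length = delta.Length();
        float angle = (float)Math.Atan2(delta.Y, delta.X);

        // Stretch the 1x1 texture along X to the line's length and rotate it
        // around its origin (the "first" end of the line).
        spriteBatch.Draw(_pixel, first, null, color, angle, Vector2.Zero,
                         new Vector2(length, 1f), SpriteEffects.None, 0f);
    }
}
Each frame, draw a segment from the previous sample's screen position to the current one, and the segments join up into a graph.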
You could write the values you want to check to a CSV file, then draw a graph with Excel. Here's a tutorial on writing a CSV file writer/parser in C#.
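A minimal logging sketch along those lines (the CsvLogger name and the column layout are just examples):
using System;
using System.IO;

public sealed class CsvLogger : IDisposable
{
    private readonly StreamWriter _writer;

    public CsvLogger(string path, string header)
    {
        _writer = new StreamWriter(path);
        _writer.WriteLine(header); // e.g. "time,velocityX,velocityY"
    }

    // One row per call, e.g. logger.Log(gameTime.TotalGameTime.TotalSeconds, velocity.X, velocity.Y);
    public void Log(params object[] values)
    {
        _writer.WriteLine(string.Join(",", values));
    }

    public void Dispose()
    {
        _writer.Dispose();
    }
}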
XNA has no support for drawing single lines/pixels to the screen directly (you need to go through a SpriteBatch). The only other thing I can think of is using a Texture2D and drawing your graph's pixels onto it. You'll have to keep the texture in front of the camera, though, with some strange matrix magic that I don't actually know much about. :P
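A rough sketch of that texture approach, assuming a 2D overlay drawn with SpriteBatch rather than a textured quad in the 3D world (sizes, colours, and the TextureGraph name are arbitrary examples):
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class TextureGraph
{
    private readonly GraphicsDevice _device;
    private readonly Texture2D _texture;
    private readonly Color[] _pixels;
    private readonly int _width, _height;

    public TextureGraph(GraphicsDevice device, int width, int height)
    {
        _device = device;
        _width = width;
        _height = height;
        _texture = new Texture2D(device, width, height);
        _pixels = new Color[width * height];
    }

    // "samples" holds one value per horizontal pixel, normalised to 0..1.
    public void Update(float[] samples)
    {
        for (int i = 0; i < _pixels.Length; i++) _pixels[i] = Color.Transparent;

        for (int x = 0; x < _width && x < samples.Length; x++)
        {
            int y = (int)MathHelper.Clamp((1f - samples[x]) * (_height - 1), 0, _height - 1);
            _pixels[y * _width + x] = Color.Lime;
        }

        _device.Textures[0] = null; // XNA forbids SetData on a texture still bound to the device
        _texture.SetData(_pixels);
    }

    public void Draw(SpriteBatch spriteBatch, Vector2 position)
    {
        spriteBatch.Draw(_texture, position, Color.White);
    }
}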
The Xen graphics API has this feature. It appears to be for some built-in variables, such as FPS, but it's all open source, so I suspect it will be easy to add graphs of your own: http://xen.codeplex.com/
EDIT
I just remembered that Farseer physics also has code to do this http://farseerphysics.codeplex.com/. It's probably easier to rip out than the Xen graphs.

How to draw digital data as a continuous signal in C#?

I am working on a project in which I read analog signals from my sensors and convert them to digital data with a PIC's ADC. Via USB, I transfer all the data to my Windows application (the user interface), which is written in C#. When I look into my input buffer, all the data is there.
My problem is what comes after these steps: how can I draw this data as a continuous signal? I use ZedGraph, and I want to observe my sensor data as a continuous signal. I know how to draw with ZedGraph; I have even drawn my input buffer a single time, but I still cannot manage to display it as a continuous signal.
Which architecture is most suitable for me? Should I use a circular buffer?
Can I use the PerformanceCounter class for my custom events, such as drawing my sensor data, or is this class only useful for system events?
You could create a PerformanceCounter; that is easy to do.
Drawing the graph yourself is a bit harder:
(I take it that by "continuous signal" you mean a line instead of points.) Just draw a line from the previous position to the current position.
A circular buffer might help (although the oldest data should be shifted out again, I think), but you still need to keep track of the previous position so you know how to draw the line. Make sure you shift the buffer in proportion to the time elapsed.
An alternative would be to push the data to an Excel sheet and use the charts there.
Can't you just assign a timestamp to each data point in your sample?
Then you should be able to plot the sample easily with ZedGraph, using time as the X-axis and value as the Y-axis; you only need to adjust some parameters like line smoothing, etc.
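A rough sketch of that idea (ZedGraph API from memory, so check your version), assuming a WinForms ZedGraphControl named zgc and a RollingPointPairList as the scrolling buffer; AddSample is a hypothetical method you would call whenever a value arrives over USB:
using System;
using System.Drawing;
using ZedGraph;

public class SignalPlot
{
    private readonly ZedGraphControl _zgc;
    private readonly RollingPointPairList _points = new RollingPointPairList(1200); // keep roughly the last 1200 samples

    public SignalPlot(ZedGraphControl zgc)
    {
        _zgc = zgc;
        GraphPane pane = _zgc.GraphPane;
        pane.XAxis.Type = AxisType.Date;                              // time on the X-axis
        pane.AddCurve("Sensor", _points, Color.Blue, SymbolType.None);
    }

    public void AddSample(double value)
    {
        _points.Add(new XDate(DateTime.Now), value);                  // XDate converts to ZedGraph's double time format
        _zgc.AxisChange();                                            // rescale the axes to the new data
        _zgc.Invalidate();                                            // trigger a redraw
    }
}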
