Kinect: How to draw bones with PNG picture instead of DrawLine?
I want a result like this: http://www.hotzehwc.com/Resource-Center/Wellness-101/skeleton2.aspx
I will get the joint positions from the Kinect:
JointA.x;
JointA.y;
JointB.x;
JointB.y;
The joint positions will change, so the PNG that connects two joints needs to be resized and rotated.
Any sample code to make this easier?
Ideally you would want to use DrawLine and other internal draw functions, so that you can scale your bones appropriately. It's just a lot harder to get them to look right at first.
Using images, you would want to cut them up into their individual pieces. The Kinect gives you a series of joints, and the connecting lines between them are the bones. First check out the SkeletonBasics-WPF example from the SDK Toolkit, supplied by Microsoft -- it will show you how they construct bones between the joints.
Now, you want to cut up your skeleton image in such a way that you have one bone per image. Create an Image object in your XAML for each image. Figure out where the joints belong in your images -- the elbow, for example, will be close to the bottom of the humerus image, but might be a few pixels into the image, and toward the middle (width-wise).
When you get the joint positions from the skeleton, translate the appropriate coordinates from the images into those positions. It is going to be a lot of math! You'll get the joints for the given bone and then calculate how to translate the bone image to the correct position and angle.
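For example, here is a minimal sketch (jointA, jointB and boneImage are hypothetical names) of stretching and rotating a bone Image in WPF so it spans two joints:
double dx = jointB.X - jointA.X;
double dy = jointB.Y - jointA.Y;
double length = Math.Sqrt(dx * dx + dy * dy);
double angleDegrees = Math.Atan2(dy, dx) * 180.0 / Math.PI;

boneImage.Width = length;  // stretch the bone to span the two joints
var transform = new TransformGroup();
// rotate around the left-middle of the image, which sits on the first joint
transform.Children.Add(new RotateTransform(angleDegrees, 0, boneImage.Height / 2));
transform.Children.Add(new TranslateTransform(jointA.X, jointA.Y - boneImage.Height / 2));
boneImage.RenderTransform = transform;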
Related
I am trying to segment arms from a Kinect depth image in my app:
I tried using the joint positions to get the vector between the elbow and wrist/hand-tip, created a rotated 2D bounding rectangle between those two joints, and then removed all pixels outside the rectangle. The problem is that the rectangle's width changes with distance from the sensor, and it can become trapezoidal (e.g. if the hand is closer to the camera), so at best it lets me discard parts of the image before doing the actual processing.
When the hand is near the body (like my left arm below), I need to detect the edge of the hand - presumably by checking the depth gradient. But I couldn't find a flood fill algorithm which "stops" at gradients.
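To make the idea concrete, here is the kind of flood fill I mean, as a rough sketch only (depth is assumed to be a ushort[,] depth map, and maxStep the largest depth step still treated as the same surface):
using System;
using System.Collections.Generic;

static bool[,] FloodFillByGradient(ushort[,] depth, int seedX, int seedY, int maxStep)
{
    int w = depth.GetLength(0), h = depth.GetLength(1);
    var visited = new bool[w, h];
    var queue = new Queue<(int x, int y)>();
    visited[seedX, seedY] = true;
    queue.Enqueue((seedX, seedY));
    var offsets = new (int dx, int dy)[] { (1, 0), (-1, 0), (0, 1), (0, -1) };
    while (queue.Count > 0)
    {
        var (x, y) = queue.Dequeue();
        foreach (var (dx, dy) in offsets)
        {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= w || ny >= h || visited[nx, ny])
                continue;
            // stop at a strong gradient, i.e. the edge of the hand against the body
            if (Math.Abs(depth[nx, ny] - depth[x, y]) > maxStep)
                continue;
            visited[nx, ny] = true;
            queue.Enqueue((nx, ny));
        }
    }
    return visited;  // true = pixel belongs to the arm segment
}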
Is there perhaps a better approach? I could use some algorithm ideas.
I have a 15 x 15 pixel box, of which I draw several in different colours using:
spriteBatch.Draw(texture, position, colour);
What I'd like to do is draw a one pixel line around the outside, in different colours, thus making it a 17 x 17 box with (for example) a blue outline one pixel wide and a grey middle.
The only way I can think of doing it is to draw two boxes, one 17x17 in the outline colour, one 15x15 with the box colour, and layer them to give the appearance of an outline:
spriteBatch.Draw(texture17by17, position, outlineColour);
spriteBatch.Draw(texture15by15, position, boxColour);
Obviously the position vector would need to be modified but I think that gives a clear picture of the idea.
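For instance, shifting the inner box by one pixel in each direction keeps it centred inside the outline:
spriteBatch.Draw(texture17by17, position, outlineColour);
spriteBatch.Draw(texture15by15, position + new Vector2(1, 1), boxColour);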
The question is: is there a better way?
You can draw lines and triangles using DrawUserIndexedPrimitives, see Drawing 3D Primitives using Lists or Strips on MSDN for more details. Other figures like rectangles and circles are constructed from lines, but you'll need to implement them yourself.
To render lines in 2D, just use an orthographic projection that mirrors the transformation matrix SpriteBatch uses.
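For example, something along these lines reproduces SpriteBatch's coordinate system (origin at the top-left, y pointing down); basicEffect is assumed to be a BasicEffect you have created:
Matrix projection = Matrix.CreateOrthographicOffCenter(
    0, GraphicsDevice.Viewport.Width,   // left, right
    GraphicsDevice.Viewport.Height, 0,  // bottom, top (flipped so y points down)
    0, 1);                              // near, far planes
basicEffect.Projection = projection;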
You can find a more complete example, with a PrimitiveBatch class that encapsulates the drawing logic, in the Primitives sample from Xbox Live Indie Games.
Since XNA can't draw "lines" the way OpenGL immediate mode can, it is far more efficient to draw a sprite with a pre-generated texture quad (2 triangles) than to draw additional dynamically textured geometry, where each "line" needs its own triangles: 2 triangles versus 4 or more, respectively, with fewer vertices too.
So I would not try to mimic lines around the outside with additional "thin" geometry; instead, continue with what you are doing: drawing two different sprites (each is a quad anyway).
Every object drawn in 3D is drawn using triangles. - Would you like to know more?
I'm new to image processing, and now I have a problem.
I'm writing a simple C# program that has to detect certain objects in images by matching them against samples.
For example here's the sample:
Later, I have to compare objects that I find in a loaded image against the sample.
The sizes of the objects and samples are always equal. The images are binarized. We always know the rotation point (it's the image's center). The samples are always normalized, but we never know an object's rotation angle relative to the normal.
Here are some objects that I find in the loaded image:
The question is how to find angle #1.
Sorry for my English and thanks.
If you are using the AForge libraries, you can also use their extension, Accord.NET.
Accord.NET is similar to AForge: you install it, add the references to your project, and you are done.
After that, you can simply compute the RawMoments by passing in the target image, and then use them to compute the CentralMoments.
At that point, you can get the angle of your image with the CentralMoments method GetOrientation().
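In code it is only a few lines; a sketch, assuming a binarized Bitmap named target and that I remember the Accord.NET moments API correctly:
using Accord.Imaging.Moments;

RawMoments raw = new RawMoments(target);           // raw moments of the blob
CentralMoments central = new CentralMoments(raw);  // derived central moments
float angleRadians = central.GetOrientation();     // orientation of the object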
I used it in a hand-gesture recognition project and it worked like a charm.
UPDATE:
I have just checked that GetOrientation() returns only the angle, not the direction.
So an upside-down image has the same angle as the original.
A fix can be pixel counting, but this time you will only have 2 samples to check (worst case) instead of 360 (worst case).
UPDATE 2:
If you have a lot of samples, I suggest filtering them by the size of the rotated image.
Example:
I get the image and see that it is in a horizontal position (90°). I rotate it by 90°, and now I have the original width and height, which I can use to skip the samples that don't match, like:
if (found.Width != sample.Width)  // you could allow a small range here, since
    continue;                     // the rotation can add a few pixels
To recap: you have a sample image and a rotated image of the same source image. You also have two pixel values, 0 and 1.
Simple pseudo-code that can yield moderate success can be implemented using a binary search:
Start with a rotation value of 180 degrees, both clockwise and counter-clockwise.
Rotate the image to both values.
XOR the original image with each rotated version.
Count the number of non-zero pixels (the differences) and check whether the count is below the margin of error you define.
Continue the search with half of the rotation angle.
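A rough C# sketch of that loop; Rotate, Xor and CountNonZero are hypothetical helpers you would implement yourself, and errorMargin is your chosen tolerance:
double step = 180, bestAngle = 0;
long bestDiff = long.MaxValue, errorMargin = 100;  // hypothetical tolerance
while (step >= 1 && bestDiff > errorMargin)
{
    foreach (double candidate in new[] { bestAngle - step, bestAngle + step })
    {
        var rotated = Rotate(sample, candidate);           // rotate by candidate degrees
        long diff = CountNonZero(Xor(original, rotated));  // count differing pixels
        if (diff < bestDiff) { bestDiff = diff; bestAngle = candidate; }
    }
    step /= 2;  // continue the search with half of the rotation angle
}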
Take a look at this:
Rotation angle of scanned document
Here's the setup: This is for an ecommerce art site where some paintings are canvas transfers. The painting wraps around the sides and top and bottom of the canvas. We have high-res images of the entire painting, but what we want to display is a quasi-3D representation of the image in which you can see how the sides of the painting wrap around the canvas. Here's a rough sketch of what I'm talking about:
My question is, how can I rotate an image in 3D space? The approach I think I'd like to take is to cut off a portion of the top and side of the image, rotate them in 3D, and then stitch them back onto the top and side to give it the 3D look. How do I go about doing that? It can be done using any .NET technology (GDI+, WPF, etc.).
In WPF, using the Viewport3D class, you can create a cuboid which is 8x5x1 units. Create the image as a texture and then apply it to the front face (8x5), the side faces (5x1), and the top and bottom faces (8x1) using texture coordinates. The front face coordinates should be (1/9, 1/6), (8/9, 1/6), (1/9, 5/6) and (8/9, 5/6); each side maps from its nearest edge to those coordinates, e.g. the left side uses (0, 1/6), (1/9, 1/6), (0, 5/6) and (1/9, 5/6).
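A minimal sketch of the front face alone, assuming the painting is in painting.jpg (the other faces are built the same way with their own positions and coordinates):
var mesh = new MeshGeometry3D();
mesh.Positions = new Point3DCollection {
    new Point3D(0, 0, 0), new Point3D(8, 0, 0),  // bottom-left, bottom-right
    new Point3D(8, 5, 0), new Point3D(0, 5, 0)   // top-right, top-left
};
// WPF texture coordinates put (0,0) at the top-left of the brush,
// so the bottom vertices take the larger v value (5/6).
mesh.TextureCoordinates = new PointCollection {
    new Point(1.0 / 9, 5.0 / 6), new Point(8.0 / 9, 5.0 / 6),
    new Point(8.0 / 9, 1.0 / 6), new Point(1.0 / 9, 1.0 / 6)
};
mesh.TriangleIndices = new Int32Collection { 0, 1, 2, 0, 2, 3 };
var material = new DiffuseMaterial(new ImageBrush(
    new BitmapImage(new Uri("painting.jpg", UriKind.Relative))));
var frontFace = new GeometryModel3D(mesh, material);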
Edit:
If you then want to be able to perform rotations on the 3D canvas model you can follow the advice here:
How can I do 3D transformation in WPF?
It looks like you don't need real 3D, only to fake it.
Chop off four strips along the top, bottom, left and right of the image. Toss the bottom and right ones (going by your sketch in the question). Scale and shear the remaining strips (I'm not expert enough at .NET/WPF to know how, but it can do it). The top strip would be scaled vertically by a factor of 0.5 (a guess; choose it to fit the desired final 3D-looking image) and sheared horizontally, and the result composited onto the output image as the top side of the canvas. The left strip would be scaled horizontally and sheared vertically.
If the end user is to view the 3D canvas from different angles interactively, this method is probably faster than rendering an honest 3D model, which would have to do texture mapping and rasterize the model into a final image, which amounts to doing the same math. The fun part is figuring out how to adjust the scaling and shearing parameters.
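As a sketch of what that looks like in GDI+ (System.Drawing / System.Drawing.Drawing2D; the strip height and factors are guesses to tune):
using (var source = new Bitmap("painting.png"))  // hypothetical input file
using (var output = new Bitmap(source.Width, source.Height))
using (var g = Graphics.FromImage(output))
{
    // Matrix(m11, m12, m21, m22, dx, dy): halve the height (m22 = 0.5)
    // and shear x by y (m21 = 0.5) so the strip leans like the sketch.
    g.Transform = new Matrix(1f, 0f, 0.5f, 0.5f, 0f, 0f);
    var topStrip = new Rectangle(0, 0, source.Width, 40);  // hypothetical 40 px strip
    g.DrawImage(source, topStrip, topStrip, GraphicsUnit.Pixel);
}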
This page might be educational: http://www.idomaths.com/linear_transformation.php
and this could be useful reference http://en.csharp-online.net/GDIplus_Graphics_Transformation%E2%80%94Image_Transformation
I don't have any experience in this kind of stuff, but when I saw this question, the first thing that came to my mind was the funny Unicornify for SO.
In this making-of article by balpha, he explains how the 2D unicorn sphere is rotated in 3D space.
But the code is written in Python. If you are interested, you can take a look at it, though I'm not exactly sure it would help you.
The brute force approach (which might be the easiest approach), is to map the u,v texture coordinates for each of the three faces, onto three billboards representing three sides of the canvas (a billboard is just two triangles that make a rectangle). Then, rotate the whole canvas (all three billboards) using matrix transforms. Tada!
Alternately, you can move the 3-space camera position with a transform, rather than the canvas. Six of one, half a dozen of the other, as they say.
I'm looking for a way to simulate a projector in WPF 3D.
I have these input parameters:
Beam shape: a black and white bitmap file
Beam size (e.g. 30°)
Beam color
Beam intensity (dimmer)
Projector position (x, y, z)
Beam position (pan (x), tilt (y), relative to the projector)
First I was thinking of using a light object, but it seems that WPF can't do that.
So now I think I can make a polygon from my bitmap for each projector...
First I need to convert the black and white bitmap to a vector shape.
Only simple shapes (bubble, line, dot, cross, ...).
Is there any WPF way to do that? Or maybe an external program (freeware)?
Then I need to build the polygon with the shape of the converted bitmap,
with color, size and orientation as parameters.
I don't know how to define the length of the beam, or whether it can be infinite...
To show the beam result, I'm thinking of making a room (floor, walls, ...), and the beams will end at those walls...
I don't care about realistic light rendering (dispersion, ...), but the scene has to render in real time, at least 15 frames per second (with anywhere from one to 100 projectors at the same time); information about position, angle, shape and color will be sent for each render...
So I need samples for that; I guess all of these things could be useful for other people too.
If you have sample code for:
Converting a bitmap to a vector shape
Extruding vectors from one point, with an angle parameter, until they collide with a wall
Setting the x, y position of the beam depending on the projector position
Setting the alpha intensity and color of the beam
Maybe I'm totally wrong and WPF is not ready for this, so advise me about other approaches (XNA, D3D), with samples of course ;-)
Thank you.
I would represent the "beam" as a light. I'd load the bitmap into a stencil buffer. You should be able to do this with OpenGL, DirectX, or XNA. AFAIK, WPF doesn't give access to the hardware for stencil buffers or shadows.
It seems that to do "light patterns on the floor" there are two ways:
use a spotlight with a cookie, or a projector with a custom shader that does additive blending;
or manually create partially transparent polygons to simulate the "rays". I need some example for one case or the other.