Make player cutout image with Kinect smoother, special anti-aliasing - C#

I tried to cut out the player image using the depth image of the Kinect, but there are some problems with that. First, when I'm using depthStreamWithPlayerIndex, only 320x240 resolution
can be used for the depth stream. Second, the function that retrieves the correct color pixel from a depth pixel only works up to 640x480. Because of these two problems, the cut-out image is not good if you want to show it at a high resolution. Now I want to know: is there any way to fix these two problems, or an algorithm to smooth the output image? Something like anti-aliasing?

Couple of things I can think of.
If you want to even out the edges of the person, then you could do this:
Make a mask that is 255 where the player is, 0 everywhere else
Smooth the mask (using Gaussian blurring with an empirically determined parameter)
Use this mask when composing the original player image with the new background
You could replace the smoothing step with morphological operations (e.g. dilation, open/close).
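A minimal sketch of that idea in plain C#, assuming the mask and the two images are raw byte arrays (mask is 0/255, images are 3 bytes per pixel); a separable box blur stands in for the Gaussian, and radius is the empirical parameter mentioned above:

    using System;

    static class MaskCompose
    {
        // Soften the 0/255 player mask and use it as per-pixel alpha when
        // compositing the player pixels over the new background.
        public static void ComposeWithSoftMask(
            byte[] mask, byte[] player, byte[] background, byte[] result,
            int width, int height, int radius)
        {
            byte[] soft = BoxBlur(mask, width, height, radius);
            for (int i = 0; i < width * height; i++)
            {
                int a = soft[i]; // 0..255 alpha from the blurred mask
                for (int c = 0; c < 3; c++)
                {
                    int p = player[i * 3 + c];
                    int b = background[i * 3 + c];
                    result[i * 3 + c] = (byte)((p * a + b * (255 - a)) / 255);
                }
            }
        }

        // Separable box blur (horizontal then vertical pass), clamped at borders.
        public static byte[] BoxBlur(byte[] src, int width, int height, int radius)
        {
            byte[] tmp = new byte[src.Length];
            byte[] dst = new byte[src.Length];
            int n = 2 * radius + 1;
            for (int y = 0; y < height; y++)
                for (int x = 0; x < width; x++)
                {
                    int sum = 0;
                    for (int k = -radius; k <= radius; k++)
                        sum += src[y * width + Math.Min(width - 1, Math.Max(0, x + k))];
                    tmp[y * width + x] = (byte)(sum / n);
                }
            for (int y = 0; y < height; y++)
                for (int x = 0; x < width; x++)
                {
                    int sum = 0;
                    for (int k = -radius; k <= radius; k++)
                        sum += tmp[Math.Min(height - 1, Math.Max(0, y + k)) * width + x];
                    dst[y * width + x] = (byte)(sum / n);
                }
            return dst;
        }
    }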
Once you've put the player on the new background, you could "feather" the player edges to make them stand out a bit less:
Apply the Canny operator to the mask from above
Dilate the mask. You now have a mask that covers the outside of the player
Blur the parts of the composed image that are under the mask
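A hedged sketch of this feathering step, as two more methods for the MaskCompose class above. It substitutes the mask's own dilated edge band for the Canny output so it stays dependency-free; band and blurRadius are assumed tuning parameters:

    // Blur the composed image only in a thin band just outside the player,
    // found by dilating the mask and keeping pixels the original mask misses.
    public static void FeatherEdges(byte[] mask, byte[] composed,
                                    int width, int height, int band, int blurRadius)
    {
        byte[] dilated = Dilate(mask, width, height, band);
        for (int c = 0; c < 3; c++)
        {
            byte[] channel = new byte[width * height];
            for (int i = 0; i < channel.Length; i++)
                channel[i] = composed[i * 3 + c];
            byte[] blurred = BoxBlur(channel, width, height, blurRadius);
            for (int i = 0; i < channel.Length; i++)
                if (dilated[i] != 0 && mask[i] == 0) // outside-the-player band only
                    composed[i * 3 + c] = blurred[i];
        }
    }

    // Morphological dilation of a 0/255 mask with a square structuring element.
    public static byte[] Dilate(byte[] mask, int width, int height, int radius)
    {
        byte[] dst = new byte[mask.Length];
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
            {
                byte m = 0;
                for (int dy = -radius; dy <= radius && m == 0; dy++)
                    for (int dx = -radius; dx <= radius; dx++)
                    {
                        int xx = x + dx, yy = y + dy;
                        if (xx >= 0 && xx < width && yy >= 0 && yy < height &&
                            mask[yy * width + xx] != 0) { m = 255; break; }
                    }
                dst[y * width + x] = m;
            }
        return dst;
    }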

Related

Allow non-pixel art game handle pixel art (integer movement)

I'm making a 2D platform game engine with C# and MonoGame that uses floating point numbers for all of its maths. It works well so far, however I'd like to have the option of using the engine for a retro-style pixel art game, which is obviously best done with integer maths. Is there a simple way to achieve this?
The simplest method I can come up with is to do all the calculations with floating point numbers, but then when I draw each sprite, I round the position to the nearest multiple of the scale of the pixel art (for example, to the nearest 5 pixels for pixel art that is scaled 5x). This works, but the movement of the player character on the screen doesn't feel smooth.
I've tried rounding the player position itself each time I update it, but this breaks my collision detection, causing the player to be stuck on the floor.
Is there a standard way people solve this?
Thanks :)
Apologies for resurrecting an ancient question, but there is a very simple way to do this, for both the programmer and the hardware, using GPU triangle rasterization (I'm assuming you're using a GPU pipeline, as this is trivial otherwise). The approach that most directly simulates the look and feel of old hardware rendering at 200p or so is to render your entire game to an offscreen texture at that actual 200p resolution, and perform all of your game logic, including collision detection, at that resolution. If you have to work in floating point, you can shift all the coordinates by half a pixel, which, combined with nearest-neighbor sampling, should make them land precisely on the desired pixel.
Then just draw a rectangle with the offscreen texture to the screen, scaled to the full display resolution using nearest-neighbor sampling and integer scale factors (2x, 3x, 5x, etc.). That should be so much simpler than scaling each individual sprite and tile.
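A minimal MonoGame sketch of this, assuming a Game subclass with a _spriteBatch field and the usual Microsoft.Xna.Framework usings; the 320x200 virtual size and the 5x scale are example values:

    RenderTarget2D _lowRes;

    protected override void LoadContent()
    {
        _spriteBatch = new SpriteBatch(GraphicsDevice);
        // The game's "virtual" resolution; all game logic works in these units.
        _lowRes = new RenderTarget2D(GraphicsDevice, 320, 200);
    }

    protected override void Draw(GameTime gameTime)
    {
        // 1) Render the whole game into the low-resolution offscreen target.
        GraphicsDevice.SetRenderTarget(_lowRes);
        GraphicsDevice.Clear(Color.Black);
        _spriteBatch.Begin(samplerState: SamplerState.PointClamp);
        // ... draw sprites and tiles at their unscaled virtual coordinates ...
        _spriteBatch.End();

        // 2) Blit the target to the backbuffer at an integer scale with
        //    nearest-neighbor (point) sampling so pixels stay crisp.
        GraphicsDevice.SetRenderTarget(null);
        GraphicsDevice.Clear(Color.Black);
        _spriteBatch.Begin(samplerState: SamplerState.PointClamp);
        _spriteBatch.Draw(_lowRes, new Rectangle(0, 0, 320 * 5, 200 * 5), Color.White);
        _spriteBatch.End();

        base.Draw(gameTime);
    }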
This is assuming you want the full retro look and feel, where you can't draw things at what would normally be sub-pixel positions (1-pixel increments after scaling 5x instead of 5-pixel increments, or even sub-pixel at full resolution). If you want something that feels more modern, where the scaled-up art can transform (move, rotate, scale) and animate in single-pixel increments or even sub-pixel with floating point, then you do need to scale up every individual sprite and tile. I don't think that would feel so retro and low-res, though; more like something very modern with scaled-up blocky pixel art. Operating at the original low resolution tends to affect not just the look of the game but the feel as well.

How to animate alpha in an image in Unity?

OK, I don't know if this is possible, but I need to animate the alpha portion of an image. For example, I have an image with a shape cut out of it so the background shows through, and I want to animate the size of this hole in Unity's animation controller.
I know it is possible to animate images by stringing together a series of different images, like sprite animation, but I want to know if there is a way to animate or cut a hole in one image using another image in Unity, and/or to animate the alpha portion.
Is this possible? Here is an example of an alpha image with a hole cut in it:
And I would want to scale/animate that cut-out inner square.
In Unity, you can animate everything that can be changed in the Inspector, including the size and color (including alpha) of your sprite.
However, as far as I know it is not possible to animate pixels of a sprite separately, meaning that you cannot change the alpha value of an area inside your sprite.
Do your images consist of a single color only or do you use complex images?
Well, I guess the shader you can find here will be the closest answer to your question.
Other than that, operations on your texture will be quite heavy to perform at runtime.
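If you go the shader route, one hedged sketch is to expose the hole size as a material property that the shader reads, then drive it from a script; the same serialized fields can also be keyed in Unity's Animation window. The _HoleSize property name is an assumption that your shader would have to define:

    using UnityEngine;

    public class HoleAnimator : MonoBehaviour
    {
        public float minSize = 0.1f;
        public float maxSize = 0.5f;
        public float speed = 1f;

        private Material _mat;

        void Start()
        {
            // Accessing .material clones it, so only this object is affected.
            _mat = GetComponent<Renderer>().material;
        }

        void Update()
        {
            // Ping-pong the hole size between minSize and maxSize.
            float t = Mathf.PingPong(Time.time * speed, 1f);
            _mat.SetFloat("_HoleSize", Mathf.Lerp(minSize, maxSize, t));
        }
    }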
Hope this helps,

Segmenting Kinect body arms

I am trying to segment arms from a Kinect depth image in my app:
I tried using joint positions to get the vector between the elbow and wrist/hand-tip, created a rotated 2D bounding rectangle between these two joints, and then removed all pixels outside the rectangle. The problem is that, depending on the distance from the sensor, this rectangle changes width and can become trapezoidal (e.g. if the hand is closer to the camera), so it basically only lets me discard parts of the image before doing the actual processing.
When the hand is near the body (like my left arm below), I need to detect the edge of the hand - presumably by checking the depth gradient. But I couldn't find a flood fill algorithm which "stops" at gradients.
Is there a better approach perhaps? I could use an algorithm idea.
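For what it's worth, a flood fill that stops at gradients is simple to write by hand: a breadth-first fill over the depth buffer that refuses to cross neighbors whose depth differs by more than a threshold. A sketch, where maxStep is an empirical value in depth units and the seed would be a joint position such as the wrist:

    using System;
    using System.Collections.Generic;

    static class ArmSegmenter
    {
        public static bool[] FloodFillDepth(ushort[] depth, int width, int height,
                                            int seedX, int seedY, int maxStep)
        {
            var visited = new bool[width * height];
            var queue = new Queue<int>();
            int seed = seedY * width + seedX;
            visited[seed] = true;
            queue.Enqueue(seed);
            int[] dx = { 1, -1, 0, 0 };
            int[] dy = { 0, 0, 1, -1 };

            while (queue.Count > 0)
            {
                int i = queue.Dequeue();
                int x = i % width, y = i / width;
                for (int k = 0; k < 4; k++)
                {
                    int nx = x + dx[k], ny = y + dy[k];
                    if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
                    int j = ny * width + nx;
                    if (visited[j]) continue;
                    // The "stop at gradients" rule: don't cross depth discontinuities.
                    if (Math.Abs(depth[j] - depth[i]) > maxStep) continue;
                    visited[j] = true;
                    queue.Enqueue(j);
                }
            }
            return visited; // true = pixel belongs to the filled arm region
        }
    }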

Comparing 2 bitmaps taken 1 second apart

I was thinking about writing a very simple program to compare two ARGB arrays pixel by pixel. Both images are the same resolution, taken with the same camera source.
Since the camera is being held still, I was expecting it to be fairly simple to compare the bitmap sources by:
Convert every pixel into a grayscale pixel
Literally compare each pixel from position 0 to N.
Have an isClose method to do an approximate +/- 3 comparison.
The result is that I have too many error bits. But when taking JPEGs of them and viewing them with the naked eye, they seem to be identical (which is the case).
Why do you think I am seeing so much error when comparing them?
BTW - I am trying to write a very basic version of motion detection.
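For reference, the comparison described above might look like this (a sketch; the luma weights are the standard Rec. 601 values, the byte order is an assumption, and IsClose is the +/- 3 tolerance from the question):

    using System;

    static class FrameCompare
    {
        public static byte[] ToGrey(byte[] argb, int pixelCount)
        {
            var grey = new byte[pixelCount];
            for (int i = 0; i < pixelCount; i++)
            {
                // Byte order A, R, G, B assumed for each 4-byte pixel.
                byte r = argb[i * 4 + 1], g = argb[i * 4 + 2], b = argb[i * 4 + 3];
                grey[i] = (byte)(0.299 * r + 0.587 * g + 0.114 * b);
            }
            return grey;
        }

        public static bool IsClose(byte a, byte b, int tolerance = 3)
            => Math.Abs(a - b) <= tolerance;

        public static int CountErrors(byte[] greyA, byte[] greyB)
        {
            int errors = 0;
            for (int i = 0; i < greyA.Length; i++)
                if (!IsClose(greyA[i], greyB[i])) errors++;
            return errors;
        }
    }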
If you are tracking a known object, you can pre-process your images before you compare them. For example, if it's a ball you're tracking and it appears brighter than its surroundings, you can threshold your greyscale image, which will produce an image with only black and white. You then detect what are known as "contours" (see the OpenCV documentation). Once you get the contour you are after (the ball) in any image, you can compare its location in each successive image. There are some algorithms that help figure out where a moving object will be next, which helps find it in the next frame.
Without knowing exactly what you are doing, it's hard to give anything concrete.
And I see you're using C#... maybe this will help: .Net (dotNet) wrappers for OpenCV?
Because the pictures are not the same.
Each time you pressed the camera's button a little differently.
The change is "huge" if you compare pixel by pixel.
I'm not an expert on motion detection, but try comparing averages around each pixel; I think it will give you better results.
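A sketch of that suggestion: compare the mean grey level of a small window around each pixel instead of single pixels, which averages out per-pixel sensor noise. The radius and tolerance are values to tune; with radius 0 it degenerates to the per-pixel comparison from the question:

    using System;

    static class WindowedCompare
    {
        public static int CountDifferences(byte[] greyA, byte[] greyB,
                                           int width, int height,
                                           int radius = 2, int tolerance = 3)
        {
            int diffs = 0;
            for (int y = 0; y < height; y++)
                for (int x = 0; x < width; x++)
                    if (Math.Abs(WindowMean(greyA, width, height, x, y, radius) -
                                 WindowMean(greyB, width, height, x, y, radius)) > tolerance)
                        diffs++;
            return diffs;
        }

        // Mean grey value in a (2*radius+1)^2 window, clamped at the image borders.
        static int WindowMean(byte[] grey, int width, int height,
                              int cx, int cy, int radius)
        {
            int sum = 0, n = 0;
            for (int y = Math.Max(0, cy - radius); y <= Math.Min(height - 1, cy + radius); y++)
                for (int x = Math.Max(0, cx - radius); x <= Math.Min(width - 1, cx + radius); x++)
                {
                    sum += grey[y * width + x];
                    n++;
                }
            return sum / n;
        }
    }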

WPF 3D extrude "a bitmap"

I'm looking for a way to simulate a projector in WPF 3D.
I have these "in" parameters:
beam shape: a black-and-white bitmap file
beam size (e.g. 30°)
beam color
beam intensity (dimmer)
projector position (x, y, z)
beam position (pan (x), tilt (y), relative to the projector)
First I was thinking of using a light object, but it seems that WPF can't do that.
So now I think I can make a polygon from my bitmap for each projector...
First I need to convert the black-and-white bitmap to vectors.
Only simple shapes (bubble, line, dot, cross...).
Is there any WPF way to do that? Or maybe an external program (freeware)?
Then I need to build the polygon with the shape of the converted bitmap, with color, size, and orientation as parameters.
I don't know how I can define the length of the beam, and whether it can be infinite...
To show the beam result, I'm thinking of making a room (floor, walls...) and the beams will end on these walls...
I don't care about a real light render (dispersion...), but the scene render has to be real time, at least 15 frames per second (with probably from one to 100 projectors at the same time); information about position, angle, shape, and color will be sent for each render...
Well, so I need samples for that; I guess all of these things could be useful for other people.
If you have sample code:
Convert a bitmap to vectors
Extrude vectors from one point with an angle parameter until they collide with a wall
Set the x,y position of the beam depending on the projector position
Set the alpha intensity and color of the beam
Maybe I'm totally wrong and WPF is not ready for that, so advise me about other ways (XNA, D3D), with samples of course ;-)
Thank you
I would represent the "beam" as a light. I'd load the bitmap into a stencil buffer. You should be able to do this with OpenGL, DirectX, or XNA. AFAIK, WPF doesn't give access to the hardware for stencil buffers or shadows.
It seems that to do "light patterns on the floor" there are two ways:
use a spotlight with a cookie, or a Projector with a custom shader that does additive blending;
or manually create partially transparent polygons to simulate the "rays". And I need some example for one or the other case.
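This answer's terms (cookie, Projector) are Unity's, so here is a hedged Unity sketch of the first option. The cookie texture is the black-and-white beam bitmap, and pan/tilt just rotate the transform; Light.cookie and LightType.Spot are real Unity APIs, the rest is an assumed setup:

    using UnityEngine;

    public class BeamProjector : MonoBehaviour
    {
        public Texture2D beamShape;   // black-and-white bitmap defining the beam
        public float beamAngle = 30f; // beam size in degrees
        public Color beamColor = Color.white;
        public float dimmer = 1f;     // beam intensity

        void Start()
        {
            var beam = gameObject.AddComponent<Light>();
            beam.type = LightType.Spot;
            beam.spotAngle = beamAngle;
            beam.color = beamColor;
            beam.intensity = dimmer;
            beam.cookie = beamShape;  // masks the spotlight with the bitmap
        }

        // Pan/tilt relative to the projector: rotate the light's transform.
        public void SetPanTilt(float pan, float tilt)
        {
            transform.localRotation = Quaternion.Euler(tilt, pan, 0f);
        }
    }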
