I'm looking for a way to simulate a projector in WPF 3D.
I have these input parameters:
beam shape: a black-and-white bitmap file
beam size (e.g. 30°)
beam color
beam intensity (dimmer)
projector position (x, y, z)
beam orientation (pan (x), tilt (y), relative to the projector)
First I was thinking of using a light object, but it seems WPF can't do that.
So now I think I can build, for each projector, a polygon from my bitmap.
First I need to convert the black-and-white bitmap to vectors.
Only simple shapes (bubble, line, dot, cross, ...).
Is there a WPF way to do that? Or maybe an external program (freeware)?
Then I need to build the polygon with the shape of the converted bitmap,
taking color, size and orientation as parameters.
I don't know how to define the length of the beam, or whether it can be infinite.
To show the beam result, I'm thinking of building a room (floor, walls, ...), and the beams would end on those walls.
I don't care about realistic light rendering (dispersion, ...), but the scene has to render in real time, at least 15 frames per second (with anywhere from one to 100 projectors at once); position, angle, shape and color information will be sent for each render.
Well, I need samples for all of that; I guess these pieces could be useful to other people.
If you have sample code for:
Converting a bitmap to vectors
Extruding vectors from one point with an angle parameter until they collide with a wall
Setting the x, y orientation of the beam depending on the projector position
Setting the alpha intensity and color of the beam
Maybe I'm totally wrong and WPF is not ready for this, so advise me on other ways (XNA, D3D), with samples of course ;-)
Thank you
I would represent the "beam" as a light. I'd load the bitmap into a stencil buffer. You should be able to do this with OpenGL, DirectX, or XNA. AFAIK, WPF doesn't give access to the hardware for stencil buffers or shadows.
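The stencil/cookie idea above boils down to a per-texel mask: the black-and-white bitmap gates how much of the (dimmed, colored) light gets through. A minimal sketch in Python of that modulation, purely to show the math; in practice this runs on the GPU via a stencil buffer or texture lookup, and all names here are illustrative:

```python
# Sketch: a beam "cookie" bitmap gating a light's contribution.
# cookie_value is the sampled bitmap texel: 0.0 (black, blocked)
# through 1.0 (white, full beam). Not a real WPF/XNA API.

def apply_cookie(light_color, intensity, cookie_value):
    """Return the light contribution after dimmer and cookie mask."""
    r, g, b = light_color
    k = intensity * cookie_value
    return (r * k, g * k, b * k)
```

A white texel passes the full dimmed beam color; a black texel blocks it entirely, which is exactly what the stencil test would do per pixel.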
It seems that to get "light patterns on the floor" there are two ways:
use a spotlight with a cookie, or a projector with a custom shader that does additive blending;
or manually create partially transparent polygons to simulate the "rays". I need an example for one case or the other.
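The transparent-polygon route amounts to building one cone mesh per projector: apex at the projector, a rim circle at distance `length` whose radius follows from the beam's half-angle. A sketch in Python of the geometry generation (the real code would fill a WPF `MeshGeometry3D`'s `Positions` and `TriangleIndices` the same way; the segment count and axis choice are assumptions):

```python
import math

def beam_cone(apex, length, half_angle_deg, segments=16):
    """Build (positions, triangles) for a cone opening along +Z from apex.
    Pan/tilt would be applied afterwards as a rotation; alpha and color
    go on the material. Sketch only, not WPF's actual API."""
    ax, ay, az = apex
    radius = length * math.tan(math.radians(half_angle_deg))
    positions = [(ax, ay, az)]  # index 0 = apex
    for i in range(segments):
        t = 2 * math.pi * i / segments
        positions.append((ax + radius * math.cos(t),
                          ay + radius * math.sin(t),
                          az + length))
    triangles = []
    for i in range(1, segments + 1):
        j = i % segments + 1  # wrap the last rim vertex back to the first
        triangles.append((0, i, j))
    return positions, triangles
```

For the "until collision with a wall" part, `length` would be the result of a ray-wall intersection instead of a constant.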
I am creating a real-time scene in XNA. It is 2D, using sprites only (rendered on quads, a standard SpriteBatch with alpha-mapped sprites). I would like to create a simple lens flare, actually only the occlusion around the light source (I don't need the direction to the camera center to offset multiple sprites for the lens flare, etc.). The only thing I basically need is to calculate how many pixels of the light-source sprite (a small star) are rendered, and set the scale of the lens-flare sprite accordingly (so scale 0 if no pixels of the relevant sprite are visible).
I know how to do it in 3D, I read through this and tested few things:
http://my.safaribooksonline.com/book/programming/game-programming/9781849691987/1dot-applying-special-effects/id286698039
I would like to ask what is the best and cheapest way to do this in a 2D scene (counting how many pixels of a sprite were rendered / occluded, with per-pixel precision or something comparable).
I know a stencil buffer could also help, but I am not sure how to apply it in this case.
Okay, there are two ways to solve it. One is the somewhat old-school approach: use the stencil buffer to count the occluded pixels and scale the lens-flare sprites according to the result.
The other is the modern approach: use screen-space lens flares. Isolate the bright pixels (I recommend an HDR rendering pipeline, using brightness values above 1.0 to generate the flares, though it depends on the scene's average and maximum) and generate ghosts, like so:
https://www.youtube.com/watch?v=_A0nKfzbs80&list=UUywPlxpmCZtuqOs6_bZEG9A
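The core of the first approach is just counting which light-source pixels survive occlusion and scaling the flare by the visible fraction. A minimal CPU-side sketch in Python of that counting step (on the GPU you'd get this number from the stencil buffer or an occlusion query instead; the 0/1 mask representation is an assumption):

```python
def flare_scale(sprite_mask, occluder_mask):
    """sprite_mask / occluder_mask: 2D lists of 0/1, same size.
    A sprite pixel counts as visible when no occluder covers it.
    Returns the visible fraction in [0, 1], used to scale the flare."""
    total = visible = 0
    for srow, orow in zip(sprite_mask, occluder_mask):
        for s, o in zip(srow, orow):
            if s:
                total += 1
                if not o:
                    visible += 1
    return visible / total if total else 0.0
```

Scale 0 when the star is fully covered, 1 when fully visible, matching the behaviour the question asks for.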
Kinect: how to draw bones with a PNG image instead of DrawLine?
I want the result to look like this: http://www.hotzehwc.com/Resource-Center/Wellness-101/skeleton2.aspx
I will get the joints positions from Kinect.
JointA.x;
JointA.y;
JointB.x;
JointB.y;
The joint positions will change, so the PNG connecting two joints needs to be resized and rotated.
Is there any sample code to make this easier?
Ideally you would want to use DrawLine and the other built-in drawing functions, so that you can scale your bones appropriately. It's just a lot harder to get them looking right at first.
Using images, you would want to cut them up into individual pieces. The Kinect tracks a series of joints, and the lines connecting them would be the bones. First check out the SkeletonBasics-WPF example from the SDK Toolkit, supplied by Microsoft -- it will show you how they construct bones between the joints.
Now you want to cut up your skeleton image so that you have one bone per image. Create an Image object in your XAML for each image. Figure out where the joints belong in your images -- the elbow, for example, will be close to the bottom of the humerus image, but might be a few pixels into the image, and toward the middle (width-wise).
When you get the joint positions from the skeleton, translate the appropriate coordinates of the images to those positions. It is going to be a lot of math! You'll take the joints for a given bone and then calculate how to translate the bone image to the correct position and angle.
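The "lot of math" is mostly this: the segment from joint A to joint B gives you the rotation angle and the scale factor for the bone image. A sketch in Python, assuming the bone PNG is authored pointing along +X (in WPF you'd feed these values into a TransformGroup of ScaleTransform, RotateTransform and TranslateTransform; the +X convention is an assumption):

```python
import math

def bone_transform(joint_a, joint_b, image_length_px):
    """Compute (scale, angle in degrees) to map a bone PNG drawn along +X
    onto the screen-space segment joint_a -> joint_b."""
    dx = joint_b[0] - joint_a[0]
    dy = joint_b[1] - joint_a[1]
    length = math.hypot(dx, dy)           # how long the bone should appear
    angle_deg = math.degrees(math.atan2(dy, dx))
    scale = length / image_length_px      # stretch image to fit the joints
    return scale, angle_deg
```

Anchor the image at joint A (offset by where the joint sits inside the PNG, as described above), then apply the rotation about that anchor.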
Is it possible to retrieve the texture coordinates of an object, for example through hit testing?
As an example: I use a 1920x1080 texture on a simple plane, and I want to get the coordinates 1920, 1080 if I click in the bottom right. (The model is in reality slightly more complex, so calculating the position via math isn't that easy.)
When math does not work for some reason, I use the following graphic hit test: assign a unique color to each texel of your plane, render one frame to an offscreen surface with lighting and effects disabled, then read the pixel color under the cursor and translate its value back to coordinates. This is quite efficient on complex models when you don't need to do such lookups too often (say, in games), because reading pixels back stalls the graphics hardware pipeline and drains performance. Also, this potentially works with any projection: orthographic or perspective.
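The encode/decode step of that color-id hit test can be sketched like this. Using 12 bits per axis is an assumption that comfortably covers a 1920x1080 texture within a 24-bit RGB value:

```python
def coords_to_color(u, v):
    """Pack a texel coordinate (u < 4096, v < 4096) into a 24-bit RGB id."""
    packed = (u << 12) | v
    return ((packed >> 16) & 0xFF, (packed >> 8) & 0xFF, packed & 0xFF)

def color_to_coords(r, g, b):
    """Recover the texel coordinate from the pixel read back under the cursor."""
    packed = (r << 16) | (g << 8) | b
    return (packed >> 12) & 0xFFF, packed & 0xFFF
```

For this to work, the offscreen pass must render without any blending, filtering, multisampling or lighting, so the id colors arrive in the framebuffer bit-exact.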
I tried to cut out the player image using the Kinect depth image, but there are some problems with that. First, when I'm using DepthStreamWithPlayerIndex, only a 320x240 resolution can be used for the depth stream. Second, the function that retrieves the correct color pixel for a depth pixel only works up to 640x480. Because of these two problems, the cut-out image doesn't look good when shown at high resolution. Is there any way to fix these two problems, or an algorithm to smooth the output image, something like anti-aliasing?
Couple of things I can think of.
If you want to even out the edges of the person, then you could do this:
Make a mask that is 255 where the player is, 0 everywhere else
Smooth the mask (using Gaussian blurring with an empirically determined parameter)
Use this mask when composing the original player image with the new background
You could replace the smoothing step with morphological operations (e.g. dilation, open/close).
Once you've put the player on the new background, you could "feather" the player edges to make them stand out a bit less:
Apply the Canny operator to the edge mask from above
Dilate the mask. You now have a mask that covers the outside of the player
Blur the parts of the composed image that are under the mask
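The mask-smoothing and compositing steps above can be sketched like this. A box blur stands in for the Gaussian here to keep the example dependency-free (real code would use a proper Gaussian kernel, e.g. via an imaging library); the 0..255 mask convention matches the description:

```python
def box_blur(mask, radius=1):
    """Cheap stand-in for the Gaussian blur step: average each mask value
    over its (2*radius+1)^2 neighbourhood. mask: 2D list of floats 0..255."""
    h, w = len(mask), len(mask[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = n = 0
            for yy in range(max(0, y - radius), min(h, y + radius + 1)):
                for xx in range(max(0, x - radius), min(w, x + radius + 1)):
                    acc += mask[yy][xx]
                    n += 1
            out[y][x] = acc / n
    return out

def compose(player_px, background_px, mask_value):
    """Per-pixel alpha blend of player over background using the
    smoothed mask (0..255) as the alpha channel."""
    a = mask_value / 255.0
    return tuple(p * a + b * (1 - a) for p, b in zip(player_px, background_px))
```

The blur turns the hard 0/255 silhouette into a soft ramp, so the player's edges fade into the new background instead of showing blocky 320x240 stair-steps.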
Is there any simple way to extrude a 2D geometry (vectors) into a 3D shape,
assuming the extrusion parameters are a length (double) and an angle (degrees)?
It should render like a cone (all the z lines converging to one point).
(I'd make this a comment, but it's too big)
This isn't just an extrusion problem.
If it were, your original 2D image would produce either a cylinder with a series of holes in it (not really that useful unless you have a very sophisticated renderer doing volumetrics or supporting transparency, and the poly sort in that case would be very ugly), or four cylinders (if I extrude along the inner holes).
Most extrusion algorithms don't deal with targeting a single point; that's more than extrusion, it's some form of raycasting.
This looks suspiciously like a lighting issue. Are you trying to do volumetric lighting, maybe show an effect where the light cone is and deal with the effect of a baffle in front of the light? Or are you trying to compute the geometry that would define the shadow cast by the object in front of the light?
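For completeness: if the goal really is just the "all z lines going to one point" shape, it is a linear shrink of the outline toward an apex rather than a classic extrusion. A sketch in Python of that shrinking (the apex position on the z-axis and the slice count are illustrative assumptions; the length/angle pair from the question fixes how fast the outline shrinks per unit of z):

```python
def extrude_to_point(outline, length, steps=4):
    """'Extrude' a 2D outline (list of (x, y) points at z = 0) so every
    slice shrinks linearly toward a single apex at (0, 0, length),
    producing a cone-like shell. Returns one ring of points per slice."""
    rings = []
    for s in range(steps + 1):
        t = s / steps            # 0 at the outline, 1 at the apex
        z = length * t
        rings.append([((1 - t) * x, (1 - t) * y, z) for x, y in outline])
    return rings
```

Adjacent rings can then be stitched into triangles for rendering; the holes in the bitmap shape simply become smaller copies of themselves on each slice, which is why this behaves like raycasting toward the apex rather than a straight prism extrusion.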