Code to extrude 2D geometry to 3D - C#

Is there any simple way to extrude a 2D geometry (vectors) into a 3D shape,
assuming the extrusion parameters are a length (double) and an angle (degrees)?
It should render like a cone (all Z lines converging to one point).

(I'd make this a comment, but it's too big)
This isn't just an extrusion problem.
If it were, your original 2D image would produce either a cylinder with a series of holes in it (not really that useful unless you have a very sophisticated renderer doing either volumetrics or supporting transparency, and the polygon sort in that case would be very ugly) or four cylinders (if you extrude along the inner holes).
Most extrusion algorithms don't deal with targeting a single point - that's more than extrusion; it's some form of ray casting.
This looks suspiciously like a lighting issue. Are you trying to do volumetric lighting, maybe to show where the light cone is and deal with the effect of a baffle in front of the light? Or are you trying to compute the geometry that would define the shadow cast by the object in front of the light?
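For what it's worth, if the goal really is the cone-like case the question describes (every extruded edge converging on one apex), a minimal sketch could look like the following. This is not a general extrusion routine; the ConeExtrude/ToApex names and the use of System.Numerics are my own illustration, and the apex would be derived from the question's length/angle parameters.

using System.Collections.Generic;
using System.Numerics;

static class ConeExtrude
{
    // Connect each edge of a closed 2D outline (in the z = 0 plane)
    // to a single apex point, producing one triangle per edge.
    public static List<Vector3> ToApex(IList<Vector2> outline, Vector3 apex)
    {
        var tris = new List<Vector3>();
        for (int i = 0; i < outline.Count; i++)
        {
            Vector2 a = outline[i];
            Vector2 b = outline[(i + 1) % outline.Count]; // wrap around to close the loop
            tris.Add(new Vector3(a, 0f));
            tris.Add(new Vector3(b, 0f));
            tris.Add(apex); // every edge fans in to the same point
        }
        return tris;
    }
}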

Related

Moving the origin in a 2D Vector space

I'm trying to learn some more about vectors in a 2D space and how to use them in game development.
I have created a small project for visualising a 'projection' of vector A onto vector B in C# using the MonoGame framework.
This is all working fine, but now I want to move my origin (which is currently in the top-left) to a custom position, so I can for example draw my lines in the middle of the screen.
I want to do this without any help from the library first, to understand what is happening.
But I can't figure out how to do this, or whether this is actually best practice in vector spaces, or whether I should just 'draw' my lines with an offset..
My understanding of math symbols and functions is not great, so if you provide a mathematical answer, please explain the symbols as well.
EDIT:
I created another project for visualising whether a point is within a certain angle, but this time I tried to draw everything with an offset (right) next to the original vectors (left).
As you can see it looks fine if I draw it with an offset, but I can't imagine this method being used in games.. Mainly because everything has a weird offset (duh..) with respect to my mouse, so you would need to implement your own cursor (which games do, but still...)
EDIT2:
Let me make my problem a little bit clearer..
If you look at my second example, imagine the origin on the right to be an Agent (NPC or Player or whatever) and the segment BC (and BC2) to be its field of vision.
If I want to calculate what is within its vision, I can do it the same way as in the example, but then this 'origin' point would be at (0,0) (top-left), and that is behind the Agent.
I'm probably missing something obvious and thinking way too hard about this..
So I finally found out how this works..
Apparently you work with different spaces or frames of reference instead of moving the origin.
A space can live inside another space, but let's keep it simple for now with two spaces.
The first space is your 'main' space (most of the time called world in game development).
The second space is your 'view' space (or camera).
(I use world and view throughout this answer.)
I was doing all my vector calculations inside my world space. So when drawing these vectors to the screen, they are drawn at positions relative to the world's reference point (which is the top-left of the screen).
To draw my vectors somewhere else I need to translate them.
Translation means moving vectors along the axes.
This action of 'changing' the position/scale/rotation of a vector is called a transformation.
We can see a transformation in a vector space simply as a change from one space to another.
This translation is done with a translation matrix (more info in the resources linked below).
So with the knowledge of these spaces and transformations I fixed my program.
All my vectors are initialized the same way as before, but when I draw my vectors to the screen I translate them according to a pre-defined translation matrix. I call this matrix my viewMatrix, because it translates vectors from world space to view space.
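In MonoGame this can be done by handing SpriteBatch a transform matrix. A minimal sketch, assuming a hypothetical 800x480 screen:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Inside Game.Draw, with spriteBatch created in LoadContent as usual.
// Translate world space so its origin lands in the middle of the screen.
Matrix viewMatrix = Matrix.CreateTranslation(400f, 240f, 0f);

// Everything drawn inside this batch is transformed from world space to view space.
spriteBatch.Begin(transformMatrix: viewMatrix);
// ... draw lines/vectors using their world-space coordinates ...
spriteBatch.End();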
But there is one thing that still needs fixing.
The vector pointA is not defined in world space, but in view space.
That means that when my mouse is at position (20,20), this position is different from position (20,20) in my world space.
To fix this I need to transform my pointA vector with the inverse of the translation matrix. This converts the vector into a vector in world space.
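In code, that inverse step is a single transform; a sketch using the same viewMatrix as above:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Input;

// The mouse is reported in view (screen) space; invert the view matrix
// to get the matching world-space position.
MouseState mouse = Mouse.GetState();
Vector2 screenPos = new Vector2(mouse.X, mouse.Y);
Vector2 pointA = Vector2.Transform(screenPos, Matrix.Invert(viewMatrix));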
So that's about it..
It took me 2 days to figure this out..
Here is a fixed version of the second example.
Left: my world space
Right: my view space
Notice how my mouse is now properly aligned in my view space instead of in my world space
Here are some resources I collected along the way:
Article - World, View and Projection Transformation Matrices
The True Power of the Matrix (Transformations in Graphics) - Computerphile
RB Whitaker - Basic Matrices
Making a Game Engine: Transformations

How do I convert a set of lines that are not intersecting into a maintainable polygon (triangle)?

I have an application I'm working on that requires a fair amount of 3D graphics programming. I have a series of lines that create both text and 3D cylindrical holes (see images).
I would like to be able to click and drag the objects in question using my mouse through the X,Y plane (Z constant). My understanding is that in order for the bounding boxes to be set up correctly, I have to have everything as 3D polygons (triangles). I would like to be able to do collision detection without this conversion. Is this possible? If I must convert, can anyone point me to a piece of code that does this rather painlessly?
You can treat each line segment as a cylinder and check each one for collisions.
Here's the math, as well as more alternatives.
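To make that concrete, here is a minimal sketch of the segment-as-cylinder test: a point-vs-capsule check via point-to-segment distance. The SegmentHit name and the System.Numerics types are my own illustration, not from the linked math.

using System;
using System.Numerics;

static class SegmentHit
{
    // True if 'point' lies within 'radius' of the segment a-b,
    // i.e. inside the cylinder (capsule) wrapped around the segment.
    public static bool Hits(Vector3 point, Vector3 a, Vector3 b, float radius)
    {
        Vector3 ab = b - a;
        // Project the point onto the segment and clamp to the endpoints.
        float t = Vector3.Dot(point - a, ab) / Vector3.Dot(ab, ab);
        t = Math.Clamp(t, 0f, 1f);
        Vector3 closest = a + t * ab;
        return Vector3.DistanceSquared(point, closest) <= radius * radius;
    }
}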

Issue with alpha in 2d lighting shader in XNA 4.0

I'm currently learning HLSL with XNA, and I figured the best place to start after tutorials would be some simple 2D shaders. I'm attempting to implement a simple lighting shader in 2D.
I draw the scene without shadows to a render target, swap the render target to a shadow map, draw my lights (each individually) onto the shadow map via the alpha channel, swap the render target back to the default, and then render the scene with the shadows on top.
The alpha of the light changes depending on the distance between the current pixel and the position of the light. This is all working fine for me, except that when two lights overlap it causes a nasty blending issue when I render the scene.
I'm using alpha blending both when I draw onto the shadow map and when I draw the shadow map onto the scene.
Am I just using the wrong blending settings here? I don't know much about blendstates.
Sorry if the question was vague.
I've had this happen before when doing software lighting, where your values exceed 255 (or 1.0) and you end up back in the realm of blackness. I believe the values wrap around (effectively a modulo operation), causing 1.1 to become 0.1, or 256 to become 1. I notice the black ellipse is actually the combined edges of the two spheres, which is what leads me to this conclusion.
Hope this gets you closer to understanding and finding your problem. I have no idea what code you are already using, but in your HLSL technique pass you could try adding:
AlphaBlendEnable = true;
SrcBlend = One;
DestBlend = One;
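If you are drawing the lights with SpriteBatch rather than setting render states inside the effect, the XNA 4.0 equivalent of those states is BlendState.Additive; a sketch (lightEffect is a placeholder name for your shader):

// Accumulate lights on the shadow map with additive blending so overlaps
// brighten instead of wrapping around into darkness.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive,
    null, null, null, lightEffect);
// ... draw each light ...
spriteBatch.End();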

Rotating part of an image in 3D space

Here's the setup: This is for an ecommerce art site where some paintings are canvas transfers. The painting wraps around the sides and top and bottom of the canvas. We have high-res images of the entire painting, but what we want to display is a quasi-3D representation of the image in which you can see how the sides of the painting wrap around the canvas. Here's a rough sketch of what I'm talking about:
My question is, how can I rotate an image in 3D space? The approach I think I'd like to take is to cut off a portion of the top and side of the image, rotate them in 3D, and then stitch them back onto the top and side to give it the 3D look. How do I go about doing that? It can be done using any .NET technology (GDI+, WPF, etc.).
In WPF, using the Viewport3D class, you can create a cuboid which is 8x5x1 units. Create the image as a texture and then apply it to the front face (8x5), the side faces (5x1), and the top and bottom faces (8x1) using texture coordinates. The front face coordinates should be (1/9, 1/6), (8/9, 1/6), (1/9, 5/6) and (8/9, 5/6); the sides run from the nearest edge to those coordinates, e.g. (0, 1/6), (1/9, 1/6), (0, 5/6) and (1/9, 5/6) for the left side.
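As a rough sketch, here is just the front face built in code; the other five faces follow the same pattern with the coordinates above. This assumes WPF's texture convention where v runs top-down, so the v values are flipped relative to a bottom-up reading:

using System.Windows;
using System.Windows.Media.Media3D;

// Front face of the 8x5x1 cuboid, textured with the middle portion
// of the painting (u from 1/9 to 8/9, v from 1/6 to 5/6).
var front = new MeshGeometry3D();
front.Positions.Add(new Point3D(0, 0, 1)); // bottom-left
front.Positions.Add(new Point3D(8, 0, 1)); // bottom-right
front.Positions.Add(new Point3D(8, 5, 1)); // top-right
front.Positions.Add(new Point3D(0, 5, 1)); // top-left
front.TextureCoordinates.Add(new Point(1.0 / 9, 5.0 / 6));
front.TextureCoordinates.Add(new Point(8.0 / 9, 5.0 / 6));
front.TextureCoordinates.Add(new Point(8.0 / 9, 1.0 / 6));
front.TextureCoordinates.Add(new Point(1.0 / 9, 1.0 / 6));
front.TriangleIndices.Add(0); front.TriangleIndices.Add(1); front.TriangleIndices.Add(2);
front.TriangleIndices.Add(0); front.TriangleIndices.Add(2); front.TriangleIndices.Add(3);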
Edit:
If you then want to be able to perform rotations on the 3D canvas model you can follow the advice here:
How can I do 3D transformation in WPF?
It looks like you're not needing to do real 3D, but only needing to fake it.
Chop off four strips along the top, bottom, left and right of the image. Toss the bottom and right (going by your sketch in the question). Scale and shear the strips (I'm not expert enough at .NET/WPF to know how, but it can do it). The top would be scaled vertically by a factor of 0.5 (a guess; choose it to fit the desired final 3D-looking image) and sheared horizontally. The result is composited onto the output image as the top side of the canvas. The left strip would be scaled horizontally and sheared vertically.
If the end user is to view the 3D canvas from different angles interactively, this method is probably faster than rendering an honest 3D model, which would have to do texture mapping and rasterize the model into a final image, which amounts to doing the same math. The fun part is figuring out how to adjust the scaling and shearing parameters.
This page might be educational: http://www.idomaths.com/linear_transformation.php
and this could be useful reference http://en.csharp-online.net/GDIplus_Graphics_Transformation%E2%80%94Image_Transformation
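In GDI+ terms, the scale-and-shear compositing of one strip might look roughly like this; the file names, output size, and factors are all placeholder guesses to be tuned, as the answer says:

using System.Drawing;
using System.Drawing.Drawing2D;

// Composite a sheared, squashed strip as the 'top side' of the canvas.
using (var output = new Bitmap(900, 600))
using (var g = Graphics.FromImage(output))
using (var topStrip = new Bitmap("top-strip.png")) // strip cut from the source image
{
    // Matrix(m11, m12, m21, m22, dx, dy): scale y by 0.5, shear x by 0.5 per unit y.
    g.Transform = new Matrix(1f, 0f, 0.5f, 0.5f, 0f, 0f);
    g.DrawImage(topStrip, 0, 0);
    g.ResetTransform();
    // ... composite the front image and the left strip similarly ...
    output.Save("quasi-3d.png");
}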
I don't have any experience with this kind of thing, but when I saw this question, the first thing that came to my mind was the funny Unicornify for SO.
In this making-of article by balpha, he explains how the 2D unicorn sphere is rotated in 3D space.
But the code is written in Python. If you are interested, you can take a look at it; I'm not sure it would help you, though.
The brute force approach (which might be the easiest) is to map the u,v texture coordinates for each of the three faces onto three billboards representing three sides of the canvas (a billboard is just two triangles that make a rectangle). Then rotate the whole canvas (all three billboards) using matrix transforms. Tada!
Alternately, you can move the 3-space camera position with a transform rather than the canvas. Six of one, half a dozen of the other, as they say.

WPF 3D extrude "a bitmap"

I'm looking for a way to simulate a projector in WPF 3D.
I have these "in" parameters:
beam shape: a black and white bitmap file
beam size (e.g. 30°)
beam color
beam intensity (dimmer)
projector position (x,y,z)
beam position (pan (x), tilt (y) relative to the projector)
First I was thinking of using a light object, but it seems that WPF can't do that.
So now I think I can build a polygon for each projector from my bitmap...
First I need to convert the black and white bitmap to vectors.
Only simple shapes (bubble, line, dot, cross...).
Is there any WPF way to do that? Or maybe an external program (freeware)?
Then I need to build the polygon with the shape of the converted bitmap,
with color, size, and orientation as parameters.
I don't know how I can define the length of the beam, or whether it can be infinite...
To show the beam result, I'm thinking of making a room (floor, walls...) and the beam will end at those walls...
I don't care about realistic light rendering (dispersion...), but the scene has to render in real time, at least 15 times per second (with probably from one to 100 projectors at the same time); information about position, angle, shape, and color will be sent for each render...
Well then, I need samples for that; I guess all of these things could be useful for other people.
If you have sample code for:
converting a bitmap to vectors
extruding vectors from one point with an angle parameter until they collide with a wall
setting the x,y position of the beam depending on the projector position
setting the alpha intensity and color of the beam
Maybe I'm totally wrong and WPF is not ready for that, so advise me about other ways (XNA, D3D), with samples of course ;-)
Thank you
I would represent the "beam" as a light. I'd load the bitmap into a stencil buffer. You should be able to do this with OpenGL, DirectX, or XNA. AFAIK, WPF doesn't give access to the hardware for stencil buffers or shadows.
It seems that to do "light patterns on the floor" there are two ways:
use a spotlight with a cookie, or a projector with a custom shader that does additive blending;
or manually create partially transparent polygons to simulate the "rays". I need an example for one case or the other.
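For the second route, here is a minimal WPF sketch of one beam as a semi-transparent triangle fanning from the projector to the wall. Every coordinate and the colour is a placeholder assumption; a real beam would be a fan of such triangles shaped by the bitmap outline:

using System.Windows.Media;
using System.Windows.Media.Media3D;

// One 'ray' of the beam: a triangle from the projector position
// to two points where the beam meets the wall.
var beam = new MeshGeometry3D();
beam.Positions.Add(new Point3D(0, 2, 0));    // projector position (placeholder)
beam.Positions.Add(new Point3D(-1, 0, -5));  // left hit point on the wall
beam.Positions.Add(new Point3D(1, 0, -5));   // right hit point on the wall
beam.TriangleIndices.Add(0);
beam.TriangleIndices.Add(1);
beam.TriangleIndices.Add(2);

// Semi-transparent emissive material so overlapping beams brighten each other.
var brush = new SolidColorBrush(Color.FromArgb(64, 255, 255, 200));
var material = new EmissiveMaterial(brush);
var model = new GeometryModel3D(beam, material);
model.BackMaterial = material; // keep the beam visible from both sides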
