I'm trying to learn more about vectors in 2D space and how to use them in game development.
I have created a small project for visualising a 'projection' of vector A onto vector B in C# using the MonoGame framework.
This is all working fine, but now I want to move my origin (which is currently in the top-left) to a custom position, so that I can, for example, draw my lines in the middle of the screen.
I want to do this without any help from the library first, to understand what is happening.
But I can't figure out how to do this, or whether it is actually best practice in vector spaces, or whether I should just 'draw' my lines with an offset.
My understanding of math symbols and functions is not great, so if you provide me with a mathematical answer, please explain the symbols as well.
EDIT:
I created another project for visualising whether a point is within a certain angle, but this time I tried to draw everything with an offset (right) next to the original vectors (left).
As you can see, it looks fine if I draw it with an offset, but I can't imagine this method being used in games, mainly because everything has a weird offset (duh..) with respect to my mouse, so you would need to implement your own cursor (which games do, but still...).
EDIT2:
Let's make my problem a little bit clearer.
If you look at my second example, imagine the origin on the right to be an agent (NPC, player, or whatever) and the segments BC (and BC2) to be its vision field.
If I want to calculate what is within its vision, I can do that the same way I did in the example, but this 'origin' point would be at (0,0) (top-left), and that is behind the agent.
I'm probably missing something obvious and thinking way too hard about this.
So I finally found out how this works.
Apparently you work with different spaces or frames instead of moving the origin (also called the reference).
A space can live inside another space, but let's keep it simple for now with 2 spaces.
The first space is your 'main' space (most of the time called 'world' in game development).
The second space is your 'view' space (or camera).
(I use 'world' and 'view' throughout this answer.)
I was doing all my vector calculations inside my world space. So when drawing these vectors to the screen, they were drawn at positions relative to the world's reference (which is the top-left of the screen).
To draw my vectors somewhere else, I need to translate them.
Translation means moving vectors along the axes.
This action of 'changing' the position/scale/rotation of a vector is called a transformation.
We can see transformations in a vector space simply as a change from one space to another.
This translation is done with a translation matrix (more info in the resources linked below).
So with the knowledge of these spaces and transformations, I fixed my program.
All my vectors are initialized the same way as before, but when I draw my vectors to the screen I translate them with a pre-defined translation matrix. I call this matrix my viewMatrix, because it translates vectors from the world space to the view space.
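Here is a minimal sketch of what that could look like in MonoGame; the names viewMatrix and worldPosition (and the screen size) are mine, not from the original project:

```csharp
// Illustrative MonoGame sketch: express world-space vectors in view space
// just before drawing. Names and numbers are placeholders.
using Microsoft.Xna.Framework;

// Move the origin to the middle of an 800x480 screen.
Matrix viewMatrix = Matrix.CreateTranslation(400f, 240f, 0f);

// A vector defined relative to the world origin.
Vector2 worldPosition = new Vector2(50f, 30f);

// Transform it into view space for drawing.
Vector2 screenPosition = Vector2.Transform(worldPosition, viewMatrix);
// screenPosition is now (450, 270): the same point, expressed in view space.
```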
But there is one thing that needs fixing.
The vector pointA is not defined in the world space, but in the view space.
So that means that when my mouse is at position (20,20), this position is different from the position (20,20) in my world space.
To fix this, I need to translate my pointA vector with the inverse of the translation matrix. This converts the vector into a vector inside the world space.
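In code, that inverse step might look something like this (again a sketch with assumed names):

```csharp
// Illustrative sketch: convert the mouse position (view space) back into
// world space by applying the inverse of the view matrix.
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Input;

Matrix viewMatrix = Matrix.CreateTranslation(400f, 240f, 0f);
Matrix inverseView = Matrix.Invert(viewMatrix);

MouseState mouse = Mouse.GetState();
Vector2 mouseInView = new Vector2(mouse.X, mouse.Y);

// pointA now lives in the same (world) space as all the other vectors.
Vector2 pointA = Vector2.Transform(mouseInView, inverseView);
```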
So that's about it.
It took me two days to figure this out.
Here is a fixed version of the second example.
Left: my world space
Right: my view space
Notice how my mouse is now properly aligned in my view space instead of in my world space.
Here are some resources I collected along the way:
Article - World, View and Projection Transformation Matrices
The True Power of the Matrix (Transformations in Graphics) - Computerphile
RB Whitaker - Basic Matrices
Making a Game Engine: Transformations
I need to combine two images in C# (.NET 4.7.2), and have the top image transformed by putting each of its four corners at specific coordinates in the image.
Is that possible? Preferably with a solution that doesn't require spending a ton of money. As far as I can tell, I can't do it with the Bitmap/Graphics classes.
Image of what I'm trying to do
Shear (or skew), which is what an affine transform such as those used in GDI+ or WPF provides, is unlikely to do what you want, if I understand the question correctly. With shear/skew the transformed coordinate space is still a parallelogram, whereas in your image the original rectangle is squeezed or stretched arbitrarily.
Assuming that's correct, I would recommend using the features in the WPF Media3D namespace (WPF, simply because it's the most accessible 3D API in the .NET context). In particular, you will want to define a texture that is your original bitmap. Then you will want to define a quadrilateral 2D surface in 3D coordinate space with sufficient resolution (i.e. triangles) for your purposes (see below), where the triangles in that surface are constructed by tessellating the shape that you want as your final image, and where you've interpolated the texture (UV) coordinates for that shape across the vertexes that result from the tessellation.
How many triangles you actually want depends on the desired quality. In theory, you could use just two. This is the simplest approach, and determining the UV coordinates is trivial, because you only have your original four corners. But there will be a visual discontinuity along the diagonal where the two triangles meet, where the interpolation of the texture pixels changes direction due to the triangles not being square to each other.
For better results, you'll need to use more triangles. But then this complicates the assignment of the UV coordinates. For each inner vertex of this surface, you'll need to interpolate across the surface. This is probably easier to do if you generate the tessellation in the first place by subdividing the quadrilateral with lines connecting opposite sides (which will form smaller interior quadrilaterals bounded by intersecting lines) and then just divide each of those quadrilaterals into pairs of triangles. If you do it this way, then you can use the distance along each line to determine the appropriate U or V coordinate at each vertex that line goes through.
Having created the appropriate texture and geometry, it's a simple matter to render the result into a RenderTargetBitmap via the Viewport3DVisual class, and then do whatever you want with that bitmap.
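To make that concrete, here is a rough sketch of the simplest two-triangle case; the method name, corner layout, and camera settings are my assumptions, not a drop-in implementation:

```csharp
// Sketch: map a bitmap onto a quadrilateral (two triangles) and render the
// result to a bitmap via Viewport3DVisual. Corners are expected in the order
// top-left, top-right, bottom-left, bottom-right, within the camera's view.
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Media.Media3D;

static RenderTargetBitmap WarpImage(BitmapSource texture, Point3D[] corners, int width, int height)
{
    var mesh = new MeshGeometry3D();
    foreach (Point3D c in corners) mesh.Positions.Add(c);
    // UVs map the full texture onto the quad.
    mesh.TextureCoordinates.Add(new Point(0, 0)); // top-left
    mesh.TextureCoordinates.Add(new Point(1, 0)); // top-right
    mesh.TextureCoordinates.Add(new Point(0, 1)); // bottom-left
    mesh.TextureCoordinates.Add(new Point(1, 1)); // bottom-right
    mesh.TriangleIndices = new Int32Collection { 0, 2, 1, 1, 2, 3 };

    var model = new GeometryModel3D(mesh, new DiffuseMaterial(new ImageBrush(texture)));

    var visual = new Viewport3DVisual
    {
        Viewport = new Rect(0, 0, width, height),
        Camera = new OrthographicCamera(
            new Point3D(0, 0, 5),   // position
            new Vector3D(0, 0, -1), // look direction
            new Vector3D(0, 1, 0),  // up
            2)                      // view width in world units
    };
    visual.Children.Add(new ModelVisual3D { Content = model });

    var bitmap = new RenderTargetBitmap(width, height, 96, 96, PixelFormats.Pbgra32);
    bitmap.Render(visual);
    return bitmap;
}
```

For the higher-quality version, you would tessellate further and interpolate the UVs across the interior vertices, as described above.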
Now, all that said…
If it turns out that your problem can be simplified such that shear/skew is sufficient for your needs, you can look at De-skew characters in binary image for help with that. In that particular example, they are trying to undo skew caused by optical effects, but skewing is skewing; the same exact principle works in either direction.
Even if your problem is not amenable to shear/skew approaches, before you implement your own solution (e.g. based on my outline above), you may want to look at other available tools. Information about some options can be found in, for example, Image Modification (cropping and de-skewing) in C# and Image comparison - rotation, alignment and scaling.
I have an application I'm working on that requires a fair amount of 3D graphics programming. I have a series of lines that create both text and 3D cylindrical holes (see images).
I would like to be able to click and drag the objects in question with my mouse through the X,Y plane (Z constant). My understanding is that in order for the bounding boxes to be set up correctly, I have to have everything as 3D polygons (triangles). I would like to be able to do collision detection without this conversion. Is this possible? If I must convert, can anyone point me to a piece of code that does this rather painlessly?
You can treat each line segment as a cylinder, and check them for collision.
Here's the math, as well as more alternatives.
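For the dragging case described here (motion constrained to the X,Y plane), the cylinder test reduces to a 2D point-vs-capsule check. A minimal sketch, assuming System.Numerics and placeholder names:

```csharp
// Is point p within "radius" of segment ab? (The segment-as-cylinder test,
// collapsed to 2D because Z is constant. Assumes a != b.)
using System;
using System.Numerics;

static bool HitsSegment(Vector2 p, Vector2 a, Vector2 b, float radius)
{
    Vector2 ab = b - a;
    // Parameter of the closest point on the infinite line, clamped to the segment.
    float t = Vector2.Dot(p - a, ab) / Vector2.Dot(ab, ab);
    t = Math.Clamp(t, 0f, 1f);
    Vector2 closest = a + t * ab;
    return Vector2.DistanceSquared(p, closest) <= radius * radius;
}
```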
I am attempting to create a function taking a plane in 3d space, and returning a plane which will fit in its entirety inside one section of a grid on the screen.
The grid on the screen is fixed and is defined by either divisions in X and Y, or by a set of lines across the screen.
The original plane can be any size or orientation on the screen, though it will never take the whole screen.
I am working in Unity 3.5.2f2 with C#. I have posted this on SO as it is quite heavily math-based, as opposed to general Unity knowledge. Ideally a solution will not use external libraries, though that is a possibility.
I have a few methods in mind and would appreciate any input:
1. Project the plane to screen space and get the min/max x and y values of the mesh (its bounding box). Use this to calculate a scale transform (using the difference between the height/length of the mesh and that of a screen division), then re-project into world space after snapping two edges of the mesh to a selected division.
2. As the divisions are rectangular in nature, create several view frustums, and come up with some method of scaling/translating the plane in 3D space to fit the frustum.
The function prototype would be:
Plane adjustPlaneToFitScreens(Plane _plane)
Any thoughts?
I solved this issue using method 1 above. Unity provides several handy functions that made the math easy, and calculating scaling and translation in pixel/screen space was far easier than in 3D space, where I would have had to take the view angle / FOV into account.
There are issues with the re-projection into world space after the scaling, but this particular application doesn't move the camera when viewing the scaled object, so the issues are not actually noticeable in black-box use.
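For reference, a rough sketch of the screen-space part of method 1 in Unity; the names (corners, cellRect) and the uniform-scale choice are my assumptions:

```csharp
// Project the plane's corners to screen space, take the bounding box, and
// derive a uniform scale factor that makes it fit a grid cell.
using UnityEngine;

public class PlaneFitter : MonoBehaviour
{
    public Camera cam;

    public float ScaleToFit(Vector3[] corners, Rect cellRect)
    {
        Vector2 min = new Vector2(float.MaxValue, float.MaxValue);
        Vector2 max = new Vector2(float.MinValue, float.MinValue);
        foreach (Vector3 corner in corners)
        {
            Vector3 s = cam.WorldToScreenPoint(corner);
            min = Vector2.Min(min, new Vector2(s.x, s.y));
            max = Vector2.Max(max, new Vector2(s.x, s.y));
        }
        // Scale so the screen-space bounding box fits the cell in both axes.
        float sx = cellRect.width / (max.x - min.x);
        float sy = cellRect.height / (max.y - min.y);
        return Mathf.Min(sx, sy);
    }
}
```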
I'm making a game in C# and XNA, and I was trying to come up with a method to render massive terrains without using a tremendous amount of memory or passing the poly limit hard-coded into XNA.
My solution so far is to create a massive heightmap, and that heightmap is loaded into memory at the beginning of the game in the initialization phase. Then, terrain is only generated nearest to the camera. This is accomplished by projecting a triangle whose vertex is the character and the other two endpoints extend to the sides of the character's viewing area. Then, all the pixels inside that triangle on the heightmap are rendered and drawn into the game, thus only rendering what is seen.
So far I've successfully found (I think; I can't test until I get terrain rendering) the three vertices of the triangle. Now I need to find a list of coordinates for every single pixel inside that triangle, whole numbers only, because I just need a list of pixels to render.
I know it sounds a little confusing, so here's the gist of it:
I have an image, and I project a triangle onto that image. The only thing I know about that triangle are the three vertices. I need a list of the pixels inside that triangle.
I've been Googling around for maybe 20 minutes now, and I figured I might as well go ahead and post something here, because what I'm trying to do isn't all that common. If I find an answer, I'll be sure to post it here.
But until then, can anyone tell me how to accomplish this?
Edit: A formula, please. If you can provide a formula or algorithm, and an explanation, that would be just perfect.
Edit: I've posted a new question, as I've ditched this method of rendering large terrains. The question is here.
Start here:
http://mathworld.wolfram.com/TriangleInterior.html
One of the non-trivial problems, not mentioned there, that you have to deal with is the pixelization along the boundary.
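One common way to handle both the interior test and the boundary is to walk the triangle's bounding box and keep the pixels whose signed areas against all three edges don't disagree in sign (zero counts as on the edge). A sketch in XNA-flavored C#, with placeholder vertex names:

```csharp
// Collect every integer pixel inside (or on the edge of) triangle abc.
using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;

static List<Point> PixelsInTriangle(Vector2 a, Vector2 b, Vector2 c)
{
    var pixels = new List<Point>();
    int minX = (int)Math.Floor(Math.Min(a.X, Math.Min(b.X, c.X)));
    int maxX = (int)Math.Ceiling(Math.Max(a.X, Math.Max(b.X, c.X)));
    int minY = (int)Math.Floor(Math.Min(a.Y, Math.Min(b.Y, c.Y)));
    int maxY = (int)Math.Ceiling(Math.Max(a.Y, Math.Max(b.Y, c.Y)));

    for (int y = minY; y <= maxY; y++)
    {
        for (int x = minX; x <= maxX; x++)
        {
            var p = new Vector2(x, y);
            // 2D cross products against each edge; the point is inside when
            // all three results share a sign.
            float d1 = Cross(p - a, b - a);
            float d2 = Cross(p - b, c - b);
            float d3 = Cross(p - c, a - c);
            bool hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
            bool hasPos = d1 > 0 || d2 > 0 || d3 > 0;
            if (!(hasNeg && hasPos)) pixels.Add(new Point(x, y));
        }
    }
    return pixels;
}

static float Cross(Vector2 u, Vector2 v) => u.X * v.Y - u.Y * v.X;
```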
Here's the setup: This is for an ecommerce art site where some paintings are canvas transfers. The painting wraps around the sides and top and bottom of the canvas. We have high-res images of the entire painting, but what we want to display is a quasi-3D representation of the image in which you can see how the sides of the painting wrap around the canvas. Here's a rough sketch of what I'm talking about:
My question is, how can I rotate an image in 3D space? The approach I think I'd like to take is to cut off a portion of the top and side of the image, rotate them in 3D, and then stitch them back onto the top and side to give it the 3D look. How do I go about doing that? It can be done using any .NET technology (GDI+, WPF, etc.).
In WPF, using the Viewport3D class, you can create a cuboid which is 8x5x1 units. Create the image as a texture and then apply it to the front face (8x5), the side faces (5x1), and the top and bottom faces (8x1) using texture coordinates. The front face coordinates should be (1/9, 1/6), (8/9, 1/6), (1/9, 5/6) and (8/9, 5/6), and each side runs from its nearest edge to those coordinates, e.g. (0, 1/6), (1/9, 1/6), (0, 5/6) and (1/9, 5/6) for the left side.
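As an illustration, a sketch of just the front face with those texture coordinates (the dimensions and winding order are my assumptions):

```csharp
// Front face (8x5) of the 8x5x1 box, with the UVs from the answer picking
// out the central 7/9 x 4/6 region of the painting.
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Media3D;

static MeshGeometry3D BuildFrontFace()
{
    var mesh = new MeshGeometry3D();
    mesh.Positions.Add(new Point3D(-4,  2.5, 0.5)); // top-left
    mesh.Positions.Add(new Point3D( 4,  2.5, 0.5)); // top-right
    mesh.Positions.Add(new Point3D(-4, -2.5, 0.5)); // bottom-left
    mesh.Positions.Add(new Point3D( 4, -2.5, 0.5)); // bottom-right
    mesh.TextureCoordinates.Add(new Point(1.0 / 9, 1.0 / 6));
    mesh.TextureCoordinates.Add(new Point(8.0 / 9, 1.0 / 6));
    mesh.TextureCoordinates.Add(new Point(1.0 / 9, 5.0 / 6));
    mesh.TextureCoordinates.Add(new Point(8.0 / 9, 5.0 / 6));
    mesh.TriangleIndices = new Int32Collection { 0, 2, 1, 1, 2, 3 };
    return mesh;
}
```

The side, top, and bottom faces follow the same pattern with their own coordinate strips.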
Edit:
If you then want to be able to perform rotations on the 3D canvas model you can follow the advice here:
How can I do 3D transformation in WPF?
It looks like you don't need to do real 3D, but only need to fake it.
Chop off four strips along the top, bottom, left and right of the image. Toss the bottom and right (going by your sketch in the question). Scale and shear the strips (I'm not expert enough at .NET/WPF to know how, but it can do it). The top strip would be scaled vertically by a factor of 0.5 (a guess; choose to fit the desired final 3D-looking image) and sheared horizontally. The result is composited onto the output image as the top side of the canvas. The left strip would be scaled horizontally and sheared vertically.
If the end user is to view the 3D canvas from different angles interactively, this method is probably faster than rendering an honest 3D model, which would have to do texture mapping and rasterizing the model into a final image, which amounts to doing the same math. The fun part is figuring out how to adjust the scaling and shearing parameters.
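For the left strip, a GDI+ version might look roughly like this; the strip width, shear factor, and scale are placeholders to be tuned as described above:

```csharp
// Draw the left strip of "source" scaled horizontally and sheared vertically.
// Matrix elements are (m11, m12, m21, m22, dx, dy); m12 makes y grow with x,
// i.e. a vertical shear.
using System.Drawing;
using System.Drawing.Drawing2D;

static void DrawLeftStrip(Graphics g, Bitmap source, int stripWidth, float shearY, float scaleX)
{
    using (var m = new Matrix(scaleX, shearY, 0f, 1f, 0f, 0f))
    {
        g.Transform = m;
        var strip = new Rectangle(0, 0, stripWidth, source.Height);
        g.DrawImage(source, strip, strip, GraphicsUnit.Pixel);
        g.ResetTransform();
    }
}
```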
This page might be educational: http://www.idomaths.com/linear_transformation.php
and this could be a useful reference: http://en.csharp-online.net/GDIplus_Graphics_Transformation%E2%80%94Image_Transformation
I don't have any experience with this kind of stuff, but when I saw this question, the first thing that came to my mind was the funny Unicornify for SO.
In this making-of article by balpha, he explains how the 2D unicorn sphere is rotated in 3D space.
The code is written in Python, so if you are interested you can take a look at it, but I'm not exactly sure it would help you.
The brute force approach (which might be the easiest approach), is to map the u,v texture coordinates for each of the three faces, onto three billboards representing three sides of the canvas (a billboard is just two triangles that make a rectangle). Then, rotate the whole canvas (all three billboards) using matrix transforms. Tada!
Alternately, you can move the 3-space camera position with a transform, rather than the canvas. Six of one, half a dozen of the other, as they say.