I'm interested in how this works and looked at the code. The important part is the warp matrix construction done by the computeSquareToQuad and computeQuadToSquare functions, but I don't understand them. Can you explain them or point me to some references?
These two methods translate between camera-space coordinates and display coordinates: computeSquareToQuad maps camera coordinates to display coordinates, and computeQuadToSquare does the reverse.
When you look at the world through a camera, the result is a flat image in which everything is distorted according to the rules of perspective (for example, squares turn into trapezoids). This distortion can be encapsulated by a warping matrix called a planar homography.
You essentially need a 3x3 matrix for the calculation (in practice a 4x4 matrix is often used instead, because it integrates easily into 3D pipelines).
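I can't speak for that particular project's code, but functions with these names usually implement Heckbert's classic construction (see "Fundamentals of Texture Mapping and Image Warping"): solve for the eight unknowns of the 3x3 homography that maps the unit square onto the four quad corners; computeQuadToSquare is then just the inverse of that matrix. A sketch of the idea in C#:

```csharp
// Sketch of a typical computeSquareToQuad (not this project's exact code):
// maps the unit square (0,0)-(1,0)-(1,1)-(0,1) onto the quad (x0,y0)..(x3,y3),
// assuming a non-degenerate quad. Result is row-major { a b c; d e f; g h 1 }.
static double[,] ComputeSquareToQuad(
    double x0, double y0, double x1, double y1,
    double x2, double y2, double x3, double y3)
{
    double sx = x0 - x1 + x2 - x3;   // zero for an affine (parallelogram) target
    double sy = y0 - y1 + y2 - y3;

    double g = 0, h = 0;
    if (sx != 0 || sy != 0)          // true perspective case
    {
        double dx1 = x1 - x2, dx2 = x3 - x2;
        double dy1 = y1 - y2, dy2 = y3 - y2;
        double det = dx1 * dy2 - dx2 * dy1;
        g = (sx * dy2 - dx2 * sy) / det;
        h = (dx1 * sy - sx * dy1) / det;
    }

    return new[,]
    {
        { x1 - x0 + g * x1, x3 - x0 + h * x3, x0 },
        { y1 - y0 + g * y1, y3 - y0 + h * y3, y0 },
        { g,                h,                1.0 }
    };
}

// Applying it: (u,v) in the square maps to (x/w, y/w) in the quad.
static (double X, double Y) Warp(double[,] m, double u, double v)
{
    double x = m[0, 0] * u + m[0, 1] * v + m[0, 2];
    double y = m[1, 0] * u + m[1, 1] * v + m[1, 2];
    double w = m[2, 0] * u + m[2, 1] * v + m[2, 2];
    return (x / w, y / w);
}
```

Note the division by w at the end: that perspective divide is exactly what makes this a projective (rather than affine) mapping.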
For more information, have a look at:
http://www.cs.utoronto.ca/~strider/vis-notes/tutHomography04.pdf
http://www.youtube.com/watch?v=fVJeJMWZcq8
I'm working on a small painting program in which I can use the mouse to move/resize the shapes with handles on the corners. This already works well, except when the shape is rotated.
I need to translate between X and Y coordinates. I've tried some sine/cosine calculations, but without success; either I have a fundamental error in my formulas, or the X/Y changes in the MouseMove event are too small for this calculation.
Does anyone have experience with this topic or perhaps a few good links (maybe with examples)?
Thx in advance, Peter
Avoid using angles as much as possible; prefer transforms, i.e. matrices.
One example is System.Numerics.Matrix3x2, which has methods to create transforms from angles, translations, scaling, etc. Some important properties of matrices are that you can combine them and invert them, in addition to simply using them to transform a point.
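For the rotated-shape case, a minimal System.Numerics sketch (the shape's angle, center and the corner-handle convention are assumptions for illustration): map the mouse into the shape's local space with the inverted transform and do the resize there, where the shape is axis-aligned.

```csharp
using System.Numerics;

// Returns the shape's new half-size when a corner handle is dragged to
// mousePosition. The trick: invert the shape's transform instead of doing
// sine/cosine math on mouse deltas.
static Vector2 ResizeWithHandle(Vector2 mousePosition, Vector2 shapeCenter, float angle)
{
    // Build the shape's transform: rotate first, then place it in the world.
    Matrix3x2 shapeTransform =
        Matrix3x2.CreateRotation(angle) *
        Matrix3x2.CreateTranslation(shapeCenter);

    // Map the mouse (world space) into the shape's local space via the
    // inverse; a rotation plus translation is always invertible.
    Matrix3x2.Invert(shapeTransform, out Matrix3x2 inverse);
    Vector2 localMouse = Vector2.Transform(mousePosition, inverse);

    // In local space the shape is axis-aligned, so the corner handle is
    // trivial: the new half-size simply follows the local mouse position.
    return Vector2.Abs(localMouse);
}
```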
It is also often useful to draw a matrix to visualize the transform: multiply a zero vector, the X unit vector and the Y unit vector by the matrix, and draw lines between the resulting points. That should give you a good visualization of what the transform does.
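For instance, given some transform m (DrawLine standing in for whatever line-drawing call your framework provides):

```csharp
// Transform the origin and the two unit vectors, then connect the dots:
// the two resulting lines show exactly where the matrix sends the axes.
Vector2 o = Vector2.Transform(Vector2.Zero,  m);
Vector2 x = Vector2.Transform(Vector2.UnitX, m);
Vector2 y = Vector2.Transform(Vector2.UnitY, m);
DrawLine(o, x); // image of the X axis
DrawLine(o, y); // image of the Y axis
```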
While not absolutely required, some linear algebra knowledge is very useful when doing things like this.
I'm trying to learn some more about vectors in 2D space and how to use them in game development.
I have created a small project for visualising a 'projection' of vector A onto vector B in C#, using the MonoGame framework.
This is all working fine, but now I want to move my origin (which is currently in the top-left) to a custom position, so I can, for example, draw my lines in the middle of the screen.
I want to do this without any help from the library first, to understand what is happening.
But I can't figure out how to do this, and whether it is actually best practice in vector spaces or whether I should just 'draw' my lines with an offset.
My understanding of math symbols and functions is not great, so if you provide a mathematical answer, please explain the symbols as well.
EDIT:
I created another project for visualising whether a point is within a certain angle, but this time I tried to draw everything with an offset (right) next to the original vectors (left).
As you can see, it looks fine if I draw it with an offset, but I can't imagine this method being used in games. Mainly because everything has a weird offset (duh..) with respect to my mouse, so you would need to implement your own cursor (which games do, but still...).
EDIT2:
Let's make my problem a little bit clearer..
If you look at my second example, imagine the origin on the right to be an agent (NPC, player, or whatever) and the segment BC (and BC2) to be its field of vision.
If I want to calculate what is within its vision, I can do that the same way as in the example, but then this 'origin' point would be at (0,0) (top-left), and that is behind the agent.
I'm probably missing something obvious and thinking way too hard about this.
So I finally found out how this works.
Apparently you work with different spaces or frames instead of moving the origin (also called the reference).
A space can live inside another space, but let's keep it simple for now with two spaces.
The first space is your 'main' space (most of the time called the world in game development).
The second space is your 'view' space (or camera).
(I use world and view throughout this answer.)
I was doing all my vector calculations inside my world space, so when these vectors were drawn to the screen, they were drawn at positions relative to the world's reference (which is the top-left of the screen).
To draw my vectors somewhere else, I need to translate them.
Translation means moving vectors along the axes.
This action of 'changing' the position/scale/rotation of a vector is called a transformation.
We can see transformations in a vector space simply as a change from one space to another.
This translation is done with a translation matrix.
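For 2D vectors in homogeneous coordinates, a translation by (tx, ty) is the matrix below; multiplying a point by it simply adds the offset:

```
| 1  0  tx |   | x |   | x + tx |
| 0  1  ty | * | y | = | y + ty |
| 0  0  1  |   | 1 |   |   1    |
```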
So with the knowledge of these spaces and transformations, I fixed my program.
All my vectors are initialized the same way as before, but when I draw them to the screen I translate them according to a pre-defined translation matrix. I call this matrix my viewMatrix, because it translates vectors from world space to view space.
But there is one thing that still needs fixing.
The vector pointA is not defined in world space, but in view space.
That means that when my mouse is at position (20,20), this position is different from position (20,20) in my world space.
To fix this, I need to transform my pointA vector with the inverse of the translation matrix. This converts the vector into a vector inside the world space.
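Both directions together in a minimal MonoGame sketch (the (400, 240) offset and the sample point are arbitrary example values, not code from my actual project):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Input;

// The view matrix shifts world space so its origin lands mid-screen;
// (400, 240) is just an example offset.
Matrix viewMatrix = Matrix.CreateTranslation(400f, 240f, 0f);

// World -> view: transform a world-space vector right before drawing it.
Vector2 worldPoint = new Vector2(50f, 20f);
Vector2 screenPoint = Vector2.Transform(worldPoint, viewMatrix);

// View -> world: the mouse lives in view (screen) space, so push it through
// the inverse of the view matrix to get the world-space pointA the math expects.
Matrix inverseView = Matrix.Invert(viewMatrix);
Vector2 pointA = Vector2.Transform(Mouse.GetState().Position.ToVector2(), inverseView);
```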
So that's about it..
It took me 2 days to figure this out..
Here is a fixed version of the second example.
Left: my world space
Right: my view space
Notice how my mouse is now properly aligned in my view space instead of in my world space
Here are some resources I collected along the way:
Article - World, View and Projection Transformation Matrices
The True Power of the Matrix (Transformations in Graphics) - Computerphile
RB Whitaker - Basic Matrices
Making a Game Engine: Transformations
I have an application I'm working on that requires a fair amount of 3D graphics programming. I have a series of lines that create both text and 3D cylindrical holes (see images).
I would like to be able to click and drag the objects in question with my mouse through the X,Y plane (Z constant). My understanding is that for the bounding boxes to be set up correctly, I would have to represent everything as 3D polygons (triangles). I would like to do the collision detection without this conversion. Is this possible? If I must convert, can anyone point me to a piece of code that does it relatively painlessly?
You can treat each line segment as a cylinder, and check them for collision.
Here's the math, as well as more alternatives.
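For the click test specifically, the cylinder check boils down to: is the click point within the cylinder's radius of the segment? A small sketch using XNA-style vectors (any vector type with a dot product works the same way; the names are mine):

```csharp
using Microsoft.Xna.Framework;

// True if point p lies within 'radius' of segment AB, i.e. inside the
// cylinder (with spherical end caps) that the segment represents.
static bool HitsSegment(Vector3 p, Vector3 a, Vector3 b, float radius)
{
    Vector3 ab = b - a;
    float abLenSq = Vector3.Dot(ab, ab);
    // Parameter of the closest point on the infinite line, clamped to [0,1]
    // so we stay on the segment; guard against a zero-length segment.
    float t = abLenSq > 0f
        ? MathHelper.Clamp(Vector3.Dot(p - a, ab) / abLenSq, 0f, 1f)
        : 0f;
    Vector3 closest = a + ab * t;
    return Vector3.DistanceSquared(p, closest) <= radius * radius;
}
```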
I'm building, with XNA, a class that encapsulates a custom geometry defined in an arbitrary way. Now I want this geometry to move; what is the best way to do it?
Is it enough to take all the vertex coordinates and multiply them by a translation matrix using the Vector3.Transform method? Or is it more efficient to assign the geometry to an Effect (say, BasicEffect) and then manipulate the world matrix?
Thanks!
I'm not that deep into XNA right now, but speaking of 3D programming in general, you should transform the world matrix to "move" the object's origin. This will be a lot faster than updating all vertices to their new coordinates (a LOT faster, as the calculation can be offloaded to the graphics card). You should never modify/update geometry data just to move stuff around. That might be okay for complex animations, but (in my opinion) not for simple translations.
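A sketch of the world-matrix approach with XNA 4's BasicEffect (position, view, projection and the primitive counts are placeholders, not code from a real project):

```csharp
// Move the object by changing its world matrix each frame;
// the vertex data itself never changes.
basicEffect.World = Matrix.CreateTranslation(position);
basicEffect.View = view;
basicEffect.Projection = projection;

foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
{
    pass.Apply(); // uploads the matrices; the GPU applies them per vertex
    graphicsDevice.DrawIndexedPrimitives(
        PrimitiveType.TriangleList, 0, 0, vertexCount, 0, triangleCount);
}
```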
I am programming various simulation tools in C#/.NET.
What I am looking for is a high-level visualization library: create a scene, with a camera with some standard controls, and render a few hundred thousand spheres to it, or some wireframes. That kind of thing. If it takes more than one line to initialize a context, it deviates from my ideal.
I've looked at SlimDX, but it's way lower level than I'm looking for (at least the documented parts, but I don't really care for any others). WPF's perspective support looked cool, but it seems targeted at static XAML-defined scenes, and that doesn't really suit me either.
Basically, I'm looking for the kind of features languages like BlitzBasic used to provide. Does that exist at all?
I'm also interested in this (as I'm also developing simulation tools) and ended up hacking together some stuff in XNA. It's definitely a lot more work than you've described, however. Note that anything you can do in WPF via XAML can also be done via code, as XAML is merely a representation of an object hierarchy and its relationships. I think that may be your best bet, though I don't have any metrics on what kind of performance you could expect with a few hundred thousand spheres (you're absolutely going to need some culling in that case, and the culling itself may be expensive if you don't use optimizations like grid partitioning).
EDIT: If you really need to support 100K entities and they can all be rendered as spheres, I would recommend that you bypass the 3D engine entirely and only use XNA for the math. I would imagine an approach like the following:
Use XNA to set up the Camera (View) and Perspective matrices. It has some handy static Matrix functions that make this easy.
Compute the Projection matrix and project all of your 'sphere' origin points into the viewing frustum. This will give you X,Y screen coordinates and a Z depth in the frustum. You can express this either as 100K individual matrix multiplications or as the multiplication of the Projection matrix by a single 4 x 100K element matrix (the points in homogeneous coordinates). In the former case, this is a great candidate for parallelism using the new .NET 4 Parallel functionality.
If you find that the 100K matrix multiplications are a problem, you can reduce this significantly by culling points before transformation if you know that only a small subset of them will be visible at a given time. For instance, you can invert the Projection matrix to find the bounds of your frustum in your original space and create an axis-aligned bounding box for the frustum. You can then exclude all points outside this box (simple comparison tests in X, Y and Z). You only need to recompute this bounding box when the Projection matrix changes, so if it changes infrequently, this can be a reasonable optimization.
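XNA ships with helpers that make this pre-cull a few lines; a sketch, where sphereCenters is the assumed point list:

```csharp
using System.Linq;
using Microsoft.Xna.Framework;

// Build an axis-aligned box around the frustum's eight corners; points
// outside the box can be rejected with cheap comparisons before any
// per-point matrix math happens.
BoundingFrustum frustum = new BoundingFrustum(view * projection);
BoundingBox bounds = BoundingBox.CreateFromPoints(frustum.GetCorners());

var candidates = sphereCenters
    .Where(p => bounds.Contains(p) != ContainmentType.Disjoint);
```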
Once you have your transformed points, clip any that fall outside the frustum (Z < 0, Z > maxDist, X < 0, Y < 0, X > width, Y > height). You can now render each point by drawing a filled circle, with its radius inversely proportional to Z (Z = 0 would have the largest radius, and Z = maxDist would probably fade to a single point). If you want to provide a sense of shading/depth, you can render with a shaded brush to very loosely emulate lighting on spheres. This works because everything in your scene is a sphere and you're presumably not worried about things like shadows. All of this would be fairly easy to do in WPF (including the shaded brush), but be sure to use DrawingVisual classes and not framework elements. Also, you'll need to make sure you draw in the correct Z order, so it helps to store the transformed points in a data structure that sorts as you add.
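Putting the projection and clipping steps together as a rough sketch (XNA used purely for the math; screenWidth/screenHeight, sphereCenters and DrawFilledCircle are placeholders for your own screen size, point list and WPF/GDI drawing code):

```csharp
using System.Collections.Generic;
using Microsoft.Xna.Framework;

// Camera setup; the positions and plane distances are arbitrary examples.
Matrix view = Matrix.CreateLookAt(new Vector3(0f, 0f, 100f), Vector3.Zero, Vector3.Up);
Matrix projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, screenWidth / (float)screenHeight, 1f, 1000f);
Matrix viewProjection = view * projection;

var visible = new List<(float X, float Y, float Depth)>();
foreach (Vector3 center in sphereCenters)
{
    // Project the homogeneous point; dividing by W performs the perspective.
    Vector4 clip = Vector4.Transform(new Vector4(center, 1f), viewProjection);
    if (clip.W <= 0f) continue;            // behind the camera
    float depth = clip.Z / clip.W;         // 0..1 between the near and far planes
    float x = (clip.X / clip.W * 0.5f + 0.5f) * screenWidth;
    float y = (1f - (clip.Y / clip.W * 0.5f + 0.5f)) * screenHeight;
    if (depth < 0f || depth > 1f ||
        x < 0f || x > screenWidth || y < 0f || y > screenHeight)
        continue;                          // clipped: outside the frustum
    visible.Add((x, y, depth));
}

// Draw far-to-near so close spheres overlap distant ones; the radius
// shrinks with depth to fake perspective size.
visible.Sort((p, q) => q.Depth.CompareTo(p.Depth));
foreach (var s in visible)
    DrawFilledCircle(s.X, s.Y, 30f * (1f - s.Depth)); // placeholder drawing call
```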
If you're still having performance problems, there are further optimizations you can pursue. For instance, if you know that only a subset of your points are moving, you can cache the transformed locations for the immobile points. It really depends on the nature of your data set and how it evolves.
Since your data set is so large, you might consider changing the way you visualize it. Instead of rendering 100K points, partition your working space into a volumetric grid and record the number (density) of points inside each grid cube. You can then project only the center of each cube and render it as a 'sphere' with some additional feedback (like color, opacity or brush texture) to indicate the point density. You can combine this technique with the traditional rendering approach by rendering near points as 'spheres' and far points as 'cluster' objects with some brush patterning to match the density. One simple algorithm is to consider a bounding sphere around the camera: all points inside the sphere are transformed normally; beyond the sphere, you only render using the density grid.
Maybe XNA Game Studio is what you are looking for.
Also take a look at DirectX.
"WPF perspective looked cool, but it seems targeted at static XAML defined scenes"
Look again, WPF can be as dynamic as you will ever need.
You can write any WPF program, including 3D, totally without XAML.
Do you have to use C#/.NET, or would MonoDevelop be good enough? I can recommend http://unity3d.com/ if you want a powerful 3D engine.