I am programming various simulation tools in C#/.NET
What I am looking for is a high-level visualization library: create a scene, add a camera with some standard controls, and render a few hundred thousand spheres or some wireframes to it. That kind of thing. If it takes more than one line to initialize a context, it deviates from my ideal.
I've looked at SlimDX, but it's way lower level than I'm looking for (at least the documented parts; I don't really care about the rest). WPF perspective looked cool, but it seems targeted at static, XAML-defined scenes, and that doesn't really suit me either.
Basically, I'm looking for the kind of features languages like BlitzBasic used to provide. Does that exist at all?
I'm also interested in this (as I'm also developing simulation tools) and ended up hacking together some stuff in XNA. It's definitely a lot more work than you've described, however. Note that anything you can do in WPF via XAML can also be done in code, as XAML is merely a representation of an object hierarchy and its relationships. I think that may be your best bet, though I don't have any metrics on what kind of performance you could expect with a few hundred thousand spheres (you're absolutely going to need some culling in that case, and the culling itself may be expensive if you don't use optimizations like grid partitioning).
EDIT: If you really need to support 100K entities and they can all be rendered as spheres, I would recommend that you bypass the 3d engine entirely and only use XNA for math. I would imagine an approach like the following:
Use XNA to set up Camera (View) and Perspective matrices. It has some handy Matrix static functions that make this easy.
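A minimal sketch of that setup (camera position, field of view, aspect ratio, and clip planes are placeholder values):

    using Microsoft.Xna.Framework;

    // View matrix: camera at (0, 0, 50) looking at the origin, Y up.
    Matrix view = Matrix.CreateLookAt(
        new Vector3(0, 0, 50), Vector3.Zero, Vector3.Up);

    // Perspective projection: 45-degree FOV, 4:3 aspect, near/far planes.
    Matrix projection = Matrix.CreatePerspectiveFieldOfView(
        MathHelper.PiOver4, 4f / 3f, 0.1f, 1000f);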
Compute the Projection matrix and project all of your 'sphere' origin points into the viewing frustum. This will give you X,Y screen coordinates and a Z depth in the frustum. You can express this either as 100K individual matrix multiplications or as the multiplication of the Projection matrix by a single 3 x 100K element matrix. In the former case, this is a great candidate for parallelism using the new .NET 4 Parallel functionality.
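A rough sketch of the parallel version, assuming points is a Vector3[] and reusing the matrices from the previous step:

    using System.Threading.Tasks;
    using Microsoft.Xna.Framework;

    Matrix viewProjection = view * projection;
    Vector4[] projected = new Vector4[points.Length];

    // .NET 4 Parallel.For spreads the transforms across cores.
    Parallel.For(0, points.Length, i =>
    {
        // Clip space, then divide by W for normalized device
        // coordinates: X, Y in [-1, 1] and Z as depth.
        Vector4 clip = Vector4.Transform(points[i], viewProjection);
        projected[i] = clip / clip.W;
    });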
If you find that the 100K matrix multiplications are a problem, you can reduce this significantly by culling points before transformation, if you know that only a small subset of them will be visible at a given time. For instance, you can invert the Projection matrix to find the bounds of your frustum in your original space and create an axis-aligned bounding box for the frustum. You can then exclude all points outside this box (simple comparison tests in X, Y and Z). You only need to recompute this bounding box when the Projection matrix changes, so if it changes infrequently, this can be a reasonable optimization.
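XNA's BoundingFrustum can do the corner extraction for you, so the bounding-box cull might be sketched as:

    using System.Linq;
    using Microsoft.Xna.Framework;

    // World-space corners of the view frustum.
    BoundingFrustum frustum = new BoundingFrustum(view * projection);
    Vector3[] corners = frustum.GetCorners();

    // Axis-aligned box enclosing the frustum; recompute only when
    // the camera matrices change.
    BoundingBox box = BoundingBox.CreateFromPoints(corners);

    // Cheap pre-transform cull: min/max comparisons in X, Y and Z.
    var visible = points.Where(p => box.Contains(p) != ContainmentType.Disjoint);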
Once you have your transformed points, clip any outside the frustum (Z < 0, Z > maxDist, X < 0, Y < 0, X > width, Y > height). You can now render each point by drawing a filled circle, with its radius inversely proportional to Z (Z = 0 would have the largest radius, and Z = maxDist would probably fade to a single point). If you want to provide a sense of shading/depth, you can render with a shaded brush to very loosely emulate lighting on spheres. This works because everything in your scene is a sphere and you're presumably not worried about things like shadows. All of this would be fairly easy to do in WPF (including the shaded brush), but be sure to use DrawingVisual classes and not framework elements. Also, you'll need to make sure you draw in the correct Z order, so it helps to store the transformed points in a data structure that sorts as you add.
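A loose sketch of that drawing pass, assuming the projected points are already sorted back to front and each carries screen X, Y and depth Z (pointsBackToFront, maxRadius and maxDist are placeholder names):

    using System.Windows;
    using System.Windows.Media;

    var visual = new DrawingVisual();
    using (DrawingContext dc = visual.RenderOpen())
    {
        foreach (var p in pointsBackToFront)   // pre-sorted by descending Z
        {
            // Radius shrinks with depth; the offset gradient origin
            // fakes a lit sphere without any real lighting.
            double radius = maxRadius * (1.0 - p.Z / maxDist);
            var brush = new RadialGradientBrush(Colors.White, Colors.SteelBlue)
            {
                GradientOrigin = new Point(0.3, 0.3)
            };
            dc.DrawEllipse(brush, null, new Point(p.X, p.Y), radius, radius);
        }
    }

In practice you would create and Freeze a few brushes up front rather than allocating one per point.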
If you're still having performance problems, there are further optimizations you can pursue. For instance, if you know that only a subset of your points are moving, you can cache the transformed locations for the immobile points. It really depends on the nature of your data set and how it evolves.
Since your data set is so large, you might consider changing the way you visualize it. Instead of rendering 100K points, partition your working space into a volumetric grid and record the number (density) of points inside each grid cube. You can then project only the center of each cube and render it as a 'sphere' with some additional feedback (like color, opacity or brush texture) to indicate the point density. You can combine this technique with the traditional rendering approach by rendering near points as 'spheres' and far points as 'cluster' objects with some brush patterning to match the density. One simple algorithm is to consider a bounding sphere around the camera: all points inside the sphere are transformed normally; beyond the sphere, you render only using the density grid.
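A bare-bones density grid might look like this (the cell size and the points array are assumptions):

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    float cellSize = 10f;   // placeholder resolution
    var density = new Dictionary<(int X, int Y, int Z), int>();

    foreach (Vector3 p in points)
    {
        var cell = ((int)(p.X / cellSize),
                    (int)(p.Y / cellSize),
                    (int)(p.Z / cellSize));
        density.TryGetValue(cell, out int count);
        density[cell] = count + 1;
    }
    // Render one 'sphere' per occupied cell at the cell's center,
    // mapping the count to color, opacity or brush texture.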
Maybe XNA Game Studio is what you are looking for.
Also take a look at DirectX.
WPF perspective looked cool, but it seems targeted at static, XAML-defined scenes
Look again: WPF can be as dynamic as you will ever need.
You can write any WPF program, including 3D, totally without XAML.
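For example, here is a tiny but complete 3D scene built entirely in code; only the Viewport3D's host (a Window or panel) is left out:

    using System.Windows.Controls;
    using System.Windows.Media;
    using System.Windows.Media.Media3D;

    var viewport = new Viewport3D();
    viewport.Camera = new PerspectiveCamera(
        new Point3D(0, 0, 5),     // position
        new Vector3D(0, 0, -1),   // look direction
        new Vector3D(0, 1, 0),    // up direction
        60);                      // field of view in degrees

    // A single red triangle plus a light, all constructed in code.
    var mesh = new MeshGeometry3D();
    mesh.Positions.Add(new Point3D(-1, 0, 0));
    mesh.Positions.Add(new Point3D(1, 0, 0));
    mesh.Positions.Add(new Point3D(0, 1, 0));
    mesh.TriangleIndices = new Int32Collection { 0, 1, 2 };

    var model = new Model3DGroup();
    model.Children.Add(new GeometryModel3D(mesh, new DiffuseMaterial(Brushes.Red)));
    model.Children.Add(new DirectionalLight(Colors.White, new Vector3D(0, 0, -1)));
    viewport.Children.Add(new ModelVisual3D { Content = model });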
Do you have to use C#/.NET, or would MonoDevelop be good enough? I can recommend http://unity3d.com/ if you want a powerful 3D engine.
I'm working on a small painting program in which I can use the mouse to move/resize shapes via handles on their corners. This already works well, except when the shape is rotated.
I need a way to translate between the screen's X/Y coordinates and the rotated shape's own coordinates. I've tried some sine/cosine calculations, but without success: either I have a fundamental error in my formulas, or the X/Y changes in the MouseMove event are too small for this calculation.
Does anyone have experience with this topic or perhaps a few good links (maybe with examples)?
Thanks in advance, Peter
Avoid using angles as much as possible; prefer transforms, i.e. matrices.
One example is System.Numerics.Matrix3x2, which has methods to create transforms from angles, translations, scaling, etc. Some important properties of matrices are that you can combine them and invert them, in addition to simply using them to transform a point.
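For instance, to interpret the mouse position in the local space of a rotated shape, build the shape's transform and invert it (angle, center, and mouseScreenPos stand in for whatever your program stores):

    using System.Numerics;

    // The shape's transform: rotation about its center.
    Matrix3x2 shapeTransform = Matrix3x2.CreateRotation(angle, center);

    // Invert it to map the mouse position back into the shape's
    // unrotated local space, where the resize math is simple again.
    if (Matrix3x2.Invert(shapeTransform, out Matrix3x2 inverse))
    {
        Vector2 localMouse = Vector2.Transform(mouseScreenPos, inverse);
        // ...apply the handle drag in local space, then render the
        // shape with shapeTransform.
    }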
It is also often useful to draw a matrix to visualize the transform: multiply the zero vector, the X unit vector and the Y unit vector by the matrix, and draw lines between these points. That should give you a good visualization of what the transform does.
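Concretely, reusing shapeTransform from the sketch above:

    // Transform the origin and the two unit vectors...
    Vector2 o = Vector2.Transform(Vector2.Zero, shapeTransform);
    Vector2 x = Vector2.Transform(Vector2.UnitX, shapeTransform);
    Vector2 y = Vector2.Transform(Vector2.UnitY, shapeTransform);
    // ...then draw lines o->x and o->y to see the transformed axes.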
While not absolutely required, some linear algebra knowledge is very useful when doing things like this.
So I've asked this question before; it has to do with tilemaps and layer order sorting. I've gathered more knowledge and understanding of the subject now, although I still don't know the right solution.
I want to make a 2D pixel game with a top-down setting. I have borders with collision on the sides of my map, tree objects, and many more objects with different dimensions in appearance, etc.
The problem lies with the layer order sorting. Imagine the borders were tree leaves the player should walk under, and that the tree objects in the middle of the map need to allow the player to walk in front of and behind them, but also along their sides, since the player can walk a little way under the trees' leaves.
As I've said, I've been experimenting with this for a while now. I've tried using single objects containing four sprites in a 2x2 pattern to count as one big sprite. I've also tweaked the Transparency Sort Mode and Axis, tried the different tilemap render modes, etc.
The layer sorting would work just perfectly IF I only used sprites that were one tile big, so basically only small rocks and boxes would work in the final result.
I would love to know the best way to handle layer order sorting so that it works with tiles spanning 2x2 and larger (like trees, big rocks, even houses), and what the best workflow for this would be.
The best way to do this would be to create a main Grid containing several Tilemaps.
Then set each of the Tilemaps to a certain layer; the higher the layer, the more priority it has over the other Tilemaps, so the Tilemap with the highest priority is displayed on top of all the others.
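In code, that ordering might be set up like this (a sketch with hypothetical Tilemap names; the same values can be set in the TilemapRenderer inspector):

    using UnityEngine;
    using UnityEngine.Tilemaps;

    public class TilemapLayerSetup : MonoBehaviour
    {
        void Awake()
        {
            // Hypothetical child Tilemaps of the main Grid; higher
            // sortingOrder draws on top of lower ones.
            transform.Find("Ground").GetComponent<TilemapRenderer>().sortingOrder = 0;
            transform.Find("Objects").GetComponent<TilemapRenderer>().sortingOrder = 1;
            transform.Find("Treetops").GetComponent<TilemapRenderer>().sortingOrder = 2;
        }
    }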
Edit: the bad-angle picture was due to depth testing. I still need the 'right' way to do 3D text rendering, though!
I got 2D text rendering working in OpenTK. It's very simple: I use .NET Graphics class methods to draw a string to a Bitmap, which I can load into the GPU via OpenTK. OK great, but how about 3D?
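For context, the 2D path is roughly the following (names and sizes are mine, not from the original post):

    using System.Drawing;
    using System.Drawing.Imaging;
    using OpenTK.Graphics.OpenGL;

    // Draw the string into a bitmap with GDI+.
    var bmp = new Bitmap(256, 64);
    using (var g = Graphics.FromImage(bmp))
    {
        g.Clear(Color.Transparent);
        g.DrawString("Hello", new Font("Arial", 24), Brushes.White, 0, 0);
    }

    // Upload the pixels as an OpenGL texture.
    int tex = GL.GenTexture();
    GL.BindTexture(TextureTarget.Texture2D, tex);
    BitmapData data = bmp.LockBits(
        new Rectangle(0, 0, bmp.Width, bmp.Height),
        ImageLockMode.ReadOnly,
        System.Drawing.Imaging.PixelFormat.Format32bppArgb);
    GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba,
        bmp.Width, bmp.Height, 0,
        OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte,
        data.Scan0);
    bmp.UnlockBits(data);
    GL.TexParameter(TextureTarget.Texture2D,
        TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);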
I'm thinking about this in terms of a cylinder. What is a cylinder? It's just a circle extruded over a certain height. That's EXACTLY what I want to do here! I researched a bunch, but surprisingly there isn't all that much info readily available, IMO, for such a basic task.
Here's what I've tried:
1) Rendering the bitmap 100 times from Z = 0.0f to 1.0f. This actually works pretty well! For certain rotations anyway.
2) Drawing 16x16x16 voxels (well, I think I'm drawing voxels). Basically the idea is to use the typical GL.TexCoord3 and GL.Vertex3 methods for drawing the SURFACE of a cube; because we are drawing so freakin' many of them, I figured it would actually give depth to my text. It sort of does, but the results are actually worse than attempt 1.
I want to get this working with a really simple solution, if one exists. I'm using Immediate mode, and if possible I'd like to keep using that.
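For reference, attempt 1 in immediate mode might look roughly like this (the texture handle, quad size, and layer count are placeholders):

    using OpenTK.Graphics.OpenGL;

    // Draw the text quad 100 times along Z to fake thickness.
    GL.Enable(EnableCap.Texture2D);
    GL.BindTexture(TextureTarget.Texture2D, textTexture);
    for (int i = 0; i < 100; i++)
    {
        float z = i / 100f;   // slices from Z = 0.0 to 1.0
        GL.Begin(PrimitiveType.Quads);
        GL.TexCoord2(0f, 0f); GL.Vertex3(0f, 0f, z);
        GL.TexCoord2(1f, 0f); GL.Vertex3(1f, 0f, z);
        GL.TexCoord2(1f, 1f); GL.Vertex3(1f, 1f, z);
        GL.TexCoord2(0f, 1f); GL.Vertex3(0f, 1f, z);
        GL.End();
    }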
This is what solution 1 looks like at a good angle versus a bad angle (the screenshots are omitted here).
I know that my method is inherently flawed, because these bitmaps don't actually have any depth when I draw them, which is why at some critical angles the text either looks flat or disappears from view.
I am making a WPF program with the ability to modify data graphically in 3D. To give the user the option to select multiple graphical objects at the same time, I want to implement a selection rectangle (just like the one in Windows Explorer). A common feature in programs like this is to offer two different behaviors for the selection rectangle and let the user choose which one is used:
1. Any object that is partially or completely inside the rectangle is selected.
2. Only objects that are completely inside the rectangle are selected.
The second method is straightforward: use the bounding box of each object and check whether it is inside the rectangle. The first one, on the other hand, seems to be quite some work. All my graphical objects are complicated 3D figures that the user can rotate in any way. At the moment I can't see any way other than checking whether any of the triangles in the mesh of any of the objects crosses my 2D rectangle, and that can be quite time consuming.
I have little experience with WPF 3D, but I have done this before in OpenGL. There I could tell OpenGL to draw a specific area of the screen and then collect a list of the objects visible in that area. All I needed for the functionality I wanted was about five lines of code.
I guess my question is this:
Is there a way to do this with WPF 3D, similar to the OpenGL approach?
If not, is there any other smart way to find all objects (Visual3D) in a viewport that are partially behind a 2D rectangle?
I refuse to believe I am the only one with this kind of problem, so I hope a clever mind can point me in the right direction.
Regards,
Sverre
Thank you for your answer!
The 2D rectangle is just in front of the camera, extending infinitely forward. I want to get any object that is partially or completely inside that frustum.
The camera we are using is an orthographic or perspective projection camera (System.Windows.Media.Media3D.ProjectionCamera). The reason we are not using the MatrixCamera is that we are using a third-party tool that does not support it. But I am sure there is a way to get the matrix from a projection camera as well, so that is hopefully not the problem.
In theory your solution sounds like just what we need, but I am not sure how to proceed. Do you have any links to sample code, or can you give some more hints on how to actually implement this?
Btw: since we are working with WPF, we do not have direct access to DirectX; at least, that's what we have concluded after some research. You mention using the Z-buffer, which we haven't been able to access through WPF. If you know a way to access the Z-buffer, it's greatly appreciated! This is off-topic, but we have struggled to disable the Z-buffer for some time and have given up…
Best regards,
Sverre
Is your intersection region a 2D rectangle, or a frustum based at a 2D rectangle and extending infinitely forward (or perhaps to some clipping limit)? If it can be construed as a viewing frustum, then you can leverage the existing capabilities of the graphics system: render the scene using a camera View and Projection that correspond to your originating rectangle, with all lighting and shading disabled and colors chosen specifically to 'tag' the different objects in your scene. This means you can use the graphics hardware to perform the clipping/projection as a 'rendering' operation, then simply enumerate the pixel values as 'tags' to determine the objects present in the rectangular view.
If you need to restrict selection to an actual 2D slice (or a very shallow frustum), you can use the Z-buffer (if you can get access to it) to exclude tagged pixels that are outside the Z range of your desired selection frustum.
The nice thing about this approach is that you probably already have the Camera matrix (it's the same one used for your main window) and only need to change the Projection matrix to be a subset of the viewing window.
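WPF never exposes the GPU buffers directly, but the tagging idea can be approximated by re-rendering the Viewport3D offscreen with a flat EmissiveMaterial color per object and reading the pixels back. A sketch, assuming each object's material can be temporarily swapped and that width, height, and viewport are yours:

    using System.Collections.Generic;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;

    // Beforehand: give every object a unique flat color, e.g.
    //   model.Material = new EmissiveMaterial(new SolidColorBrush(tagColor));

    // Render the tag-colored viewport into an offscreen bitmap.
    var rtb = new RenderTargetBitmap(width, height, 96, 96, PixelFormats.Pbgra32);
    rtb.Render(viewport);   // viewport is the Viewport3D

    // Read the pixels back and collect the distinct tag colors.
    var pixels = new byte[width * height * 4];
    rtb.CopyPixels(pixels, width * 4, 0);
    var seenTags = new HashSet<int>();
    for (int i = 0; i < pixels.Length; i += 4)
    {
        // Pbgra32 stores bytes as B, G, R, A; 0 means background.
        int tag = (pixels[i + 2] << 16) | (pixels[i + 1] << 8) | pixels[i];
        seenTags.Add(tag);  // map each tag back to its object
    }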
A 'smart' way would be to transform the rectangle into a box using the camera's matrix, and then do an intersection test between all the objects and the box.
I read about DynamicVertexBuffer and how it's supposed to be better for data that changes often. I have a world built up from cubes, and I need to store the cubes' vertices in this buffer to draw them to the screen.
However, not all cubes have vertices (some are air, which is transparent), and not all faces of the cubes need to be drawn either (where they face each other), so how do I keep track of which vertices are stored where in the buffer? Also, certain faces need to be drawn last, namely the ones with transparency in them (like glass or leaves), and these faces also need to be drawn in back-to-front order so as not to mess up the alpha blending.
If all of these vertices are stored arbitrarily in the buffer, how do I know which vertices are where?
Also, the number of vertices can change, but the DynamicVertexBuffer doesn't seem very dynamic to me, since I can't change its size at all. Do I have to recreate the buffer every time I need to add or remove faces?
It sounds like you are approaching this the wrong way, assuming you have anything more than a trivial number of cubes in your world. You should store the world (and its cubes) in a custom data structure that lets you rapidly determine which cubes (and faces) are visible, based on the rules of your world, from a given point looking in a given direction.
Then, each time you render a scene, generate batches of vertex buffers containing just those faces. So don't use vertex buffers as the basis for storing the entire geometry of your world: vertex buffers are a rendering tool, not a world scene graph tool.
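Per frame, that refill might look like the following sketch (maxVertices, visibleFaces, and FillVisibleFaceVertices are assumptions, and GraphicsDevice is your Game's device):

    using Microsoft.Xna.Framework.Graphics;

    // Created once, sized for the worst case you expect to draw.
    var vertices = new VertexPositionNormalTexture[maxVertices];
    var buffer = new DynamicVertexBuffer(GraphicsDevice,
        typeof(VertexPositionNormalTexture), maxVertices, BufferUsage.WriteOnly);

    // Each frame: gather just the visible faces into the CPU-side array
    // (FillVisibleFaceVertices is a hypothetical helper), then overwrite
    // the whole buffer in one call instead of resizing it.
    int count = FillVisibleFaceVertices(vertices, visibleFaces);
    buffer.SetData(vertices, 0, count, SetDataOptions.Discard);

    GraphicsDevice.SetVertexBuffer(buffer);
    GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, count / 3);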
These kinds of large-scale visibility decisions are much faster made in code than on the GPU. For example, if you are sitting at the origin looking along +x, you can immediately ignore all cubes in the negative x direction; that is a very simple example.
For a more complete example, search for octree rendering. That kind of rendering would match your world layout quite nicely.
Final tip: when I say generate batches of vertex buffers, I mean batch your cubes together in ways that minimize changes to the state of the GPU (e.g. same texture, same shader, etc.). Minimizing changes to the state of the GPU is key to optimizing the rendering, once you've gone as far as you can with culling faces from the render in the first place.