Using DynamicVertexBuffer in XNA 4.0 - c#

I read about DynamicVertexBuffer, and how it's supposed to be better for data that changes often. I have a world built up by cubes, and I need to store the cubes' vertices in this buffer to draw them to the screen.
However, not all cubes have vertices (some are air, which is transparent) and not all faces of the cubes need to be drawn either (they are facing each other), so how do I keep track of what vertices are stored where in the buffer? Also, certain faces need to be drawn last, namely the ones with transparency in them (like glass or leaves), and these faces also need to be drawn in a back-to-front order to not mess up the alpha blending.
If all of these vertices are stored arbitrarily in this buffer, how do I know what vertices are where?
Also, the number of vertices can change, but the DynamicVertexBuffer doesn't seem very dynamic to me, since I can't change its size at all. Do I have to recreate the buffer every time I need to add or remove faces?

Sounds like you are approaching this the wrong way - assuming you have anything more than a trivial number of cubes in your world. You should store the world (and its cubes) in a custom data structure that lets you rapidly determine which cubes (and faces) are visible, based on the rules of your world, from a given point looking in a given direction.
Then, each time you render a scene, generate batches of vertex buffers containing just those faces. So don't use vertex buffers as the basis for storing the entire geometry of your world. Vertex buffers are a rendering tool, not a world scene graph tool.
These kinds of large-scale visibility decisions are much faster made in code than on the GPU. For example, if you are sitting at the origin looking along +x, you can immediately ignore all cubes in the -x direction; that is a very simple example.
For a more complete example, search for octree rendering. This kind of rendering would match your world layout quite nicely.
Final tip - when I say generate batches of vertex buffers, I mean batch your cubes together in ways that minimize changes to the state of the GPU (e.g. same texture, same shader, etc.). Minimizing GPU state changes is key to optimizing the rendering - once you've gone as far as you can with culling faces from the render in the first place.
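As a rough illustration of that idea, here is a minimal C# sketch (XNA 4.0) of filling a DynamicVertexBuffer from only the currently visible faces whenever the visible set changes; CollectVisibleFaceVertices and the Camera type are hypothetical stand-ins for whatever your world data structure provides:

    // Sized once for a worst case; only visible-face vertices are written on each rebuild.
    DynamicVertexBuffer buffer = new DynamicVertexBuffer(
        GraphicsDevice, typeof(VertexPositionColor), maxVisibleVertices, BufferUsage.WriteOnly);
    VertexPositionColor[] scratch = new VertexPositionColor[maxVisibleVertices];
    int visibleVertexCount;

    void RebuildVisibleGeometry(Camera camera)
    {
        // Hypothetical: walks the world structure and appends vertices for visible faces.
        visibleVertexCount = CollectVisibleFaceVertices(camera, scratch);
        // Discard tells the driver the old contents are irrelevant, avoiding a GPU stall.
        buffer.SetData(scratch, 0, visibleVertexCount, SetDataOptions.Discard);
    }

    void DrawWorld()
    {
        GraphicsDevice.SetVertexBuffer(buffer);
        GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, visibleVertexCount / 3);
    }

Transparent faces would go into a second buffer of their own, sorted back-to-front before the SetData call.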

Related

How to fill out a procedurally generated mesh with quads of varying sizes?

I'm generating meshes based on color-coded areas of an image.
In the following picture, the semi-transparent image is a representation of what a color coded section might look like, with the black dots being natural 2d vertex positions.
Representation
The way I currently create a mesh is by iterating through a nested x,y for loop and making a 1x1 quad.
However, I would like to generate the meshes in a way in which I could specify the desired quad size, perhaps something like the following image. (Numbers are the order of generation)
Proposed generation with desired 3x3 quads
The generation doesn't have to follow this scheme exactly, or even consist of quads. The only important thing is that I'm able to specify the desired size of the generated triangles, in order to make meshes of varying detail to use in an LOD system.
Would you happen to know what kind of mathematical area I should look into in order to figure out how to write this logic, or, better yet, an algorithm or a library that can do the aforementioned?
I, of course, plan to texture the generated mesh, so I wanted to additionally ask whether the UVs are going to be messed up with this kind of generation, and whether fixing those UVs at runtime is going to be problematic and CPU-intensive.
This doesn't seem incredibly difficult, and there are almost certainly branches of mathematics that deal with this kind of problem, but I don't think it's necessary to go down that route (edit: see addendum below).
I would treat it as a recursion problem, where you start with a 2^n x 2^m rectangle and then analyze it as four blocks of size 2^(n-1) x 2^(m-1), progressively going from there until you reach a block of 2x2 or 1x1, or whatever size you think makes sense given the starting size.
Essentially, you would start at, e.g., 512x256 and split into two 256x256 blocks, then split both of those into four 128x128 blocks. If a block is completely filled (positive), add that block as a quad to the list; otherwise break the unfilled block into four smaller 64x64 blocks. Continue to either add quads or break blocks apart further until you reach some minimum size that makes sense for the level of detail that's needed.
Here's the concept quickly sketched out in MS Paint:
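And here is a minimal code sketch of the same recursion, assuming a square power-of-two region, a boolean occupancy grid called filled, and a hypothetical emitQuad callback that appends a quad of the given size to your mesh:

    void Subdivide(bool[,] filled, int x, int y, int size, int minSize,
                   Action<int, int, int> emitQuad)   // emitQuad(x, y, size)
    {
        // Check whether every cell in this block is covered.
        bool allFilled = true;
        for (int i = x; i < x + size && allFilled; i++)
            for (int j = y; j < y + size && allFilled; j++)
                if (!filled[i, j]) allFilled = false;

        if (allFilled)
        {
            emitQuad(x, y, size);              // whole block covered: one big quad
        }
        else if (size > minSize)
        {
            int half = size / 2;               // otherwise recurse into four children
            Subdivide(filled, x,        y,        half, minSize, emitQuad);
            Subdivide(filled, x + half, y,        half, minSize, emitQuad);
            Subdivide(filled, x,        y + half, half, minSize, emitQuad);
            Subdivide(filled, x + half, y + half, half, minSize, emitQuad);
        }
        else
        {
            // At the minimum size, fall back to per-cell quads for the covered cells.
            for (int i = x; i < x + size; i++)
                for (int j = y; j < y + size; j++)
                    if (filled[i, j]) emitQuad(i, j, 1);
        }
    }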
Addendum
If you want an algorithm - not sure why I didn't think of this before - there's a two-dimensional algorithm called Marching Squares, which is the lower-dimensional version of Marching Cubes, the algorithm most commonly associated with voxels.
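For orientation, the core of Marching Squares is just a 4-bit case index per 2x2 block of samples; a small lookup table then maps each of the 16 cases to the edge segments to emit. A minimal sketch of the index computation (one common corner ordering, with inside being a hypothetical boolean sampling of your color-coded image):

    int CaseIndex(bool[,] inside, int x, int y)
    {
        int index = 0;
        if (inside[x,     y    ]) index |= 1;  // bottom-left corner
        if (inside[x + 1, y    ]) index |= 2;  // bottom-right corner
        if (inside[x + 1, y + 1]) index |= 4;  // top-right corner
        if (inside[x,     y + 1]) index |= 8;  // top-left corner
        return index;                          // 0 = empty cell, 15 = fully inside
    }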

2D Procedural Generation

I'm currently trying to implement procedurally generated tilemaps for an overworld. I'm as far as generating noise and building the tilemaps using different ranges within the generated noise. However, they're very crude, as each different 'biome' is just one tile. What is the best way to implement the detailing and edges of each biome, e.g. flowers/trees/etc. spread out in a forest biome, beach-to-water transition tiles between the beach and ocean biomes, and so on?
For some context, I built an engine in MonoGame with a friend of mine. We implemented a chunk loading system for dynamic generation and infinite scrolling with no load screens (not exactly state of the art, I know, but I was proud of it). Each chunk is 50x50 tiles, and there are 9 chunks loaded at any given time. When the player moves between chunks in any direction, the chunks on the opposite corner are unloaded and new ones are loaded in the direction the player is walking. Since the player starts on the opposite side of where chunks are being generated, it hasn't been a problem thus far, as the maps are big enough that they're done generating by the time the player gets to them. I'm not sure whether the method I currently have in mind is going to change this or not.
Anyway, I'm thinking that I need to determine a specific set of tiles for each biome and generate noise for each biome instance to determine the placement of 'detail' tiles specific to that biome. For the edges, I'd just loop through each adjacent tile to determine whether it's a different biome and, if so, use the transition tile. However, the whole idea seems very inefficient, as that's a ton of noise to generate, as well as looping through 7500 tiles every time the player moves chunks. I've been trying to think of a better method, but this is my first foray into procedural generation and I haven't been able to find much online that goes any more in-depth than generating noise and using that noise to generate chunks of land. Is there a more efficient method I should use, or is my next step going to be optimizing? I can't imagine my method is going to be very efficient or practical given how much looping through every tile I'm going to be doing. Anyone have any ideas or suggestions? Thanks in advance.
One idea:
Use noise to generate height, temperature, rain, ... for each chunk
Use generated values to determine the biome for each chunk
For each tile, interpolate the generated values of the surrounding chunks
Use the interpolated values to select ground textures, plants, ...
That should generate smooth borders between biomes, and different chunks with the same biome can still have a different feel. A minimal sketch of the interpolation step follows below.
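Here is that sketch, assuming 50x50 chunks, a hypothetical ChunkValue(cx, cy) that returns one generated value (say, temperature) for a chunk, and a hypothetical PickGroundTile that maps the interpolated values to a tile id:

    int TileIndexAt(int tileX, int tileY)
    {
        const int ChunkSize = 50;

        // Position of this tile relative to the grid of chunk centers.
        float fx = (tileX - ChunkSize * 0.5f) / ChunkSize;
        float fy = (tileY - ChunkSize * 0.5f) / ChunkSize;
        int cx = (int)Math.Floor(fx), cy = (int)Math.Floor(fy);
        float tx = fx - cx, ty = fy - cy;

        // Bilinearly interpolate one value from the four surrounding chunks.
        float t00 = ChunkValue(cx, cy),     t10 = ChunkValue(cx + 1, cy);
        float t01 = ChunkValue(cx, cy + 1), t11 = ChunkValue(cx + 1, cy + 1);
        float temperature = MathHelper.Lerp(
            MathHelper.Lerp(t00, t10, tx),
            MathHelper.Lerp(t01, t11, tx), ty);

        // Repeat for height/rain, then map the interpolated values to a tile.
        return PickGroundTile(temperature);
    }

The same interpolated values can also drive the placement of detail tiles, so the edge of a forest chunk naturally gets a different density of trees than its center.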

Floor draw method in isometric engine

I'm working on an isometric 2D tile engine for an RTS game. I have two ways I could draw the floor. One option is one big image (for example, 8000px x 8000px, around 10MB); the second option is to draw images tile by tile, only in the visible area.
My question is: which is better for performance?
Performance-wise and memory-wise, a tiled approach is better.
Memory-wise: If you can use a single spritesheet to hold the textures of every tile you need to render, then the amount of memory used would decrease tremendously - as opposed to redefining textures for tiles you want to render more than once. Also, on every texture there is an attribute called "pitch". This attribute tells us how much more memory is being used than the image actually needs. What? Why would my program be doing this? Back in the good old days, when Ben Kenobi was still called Obi Wan Kenobi, textures took up the memory they were supposed to. But now, with hardware acceleration, the GPU adds some padding to your texture to make it align with boundaries that it can process faster. This is memory you can reduce with the use of a spritesheet.
From a performance standpoint: whenever you draw a regular sprite to the screen, the graphics hardware requires three main pieces of information: 1) the texture you want to render from, 2) what part of that texture you want to render from, and 3) where on the screen you want to render to. Repeat for every object you want to render. With a spritesheet, the texture data only has to be passed once - a big performance increase, because passing data from the CPU to the GPU (and vice versa) is really slow.
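A minimal XNA sketch of the idea - the sheet texture is bound once per batch and each tile only changes the source rectangle (plain orthogonal placement shown; the isometric offset math is omitted):

    Rectangle SourceForTile(int tileId, int tileSize, int sheetColumns)
    {
        // Which cell of the spritesheet holds this tile's image.
        return new Rectangle((tileId % sheetColumns) * tileSize,
                             (tileId / sheetColumns) * tileSize,
                             tileSize, tileSize);
    }

    void DrawVisibleTiles(SpriteBatch spriteBatch, Texture2D sheet, int[,] map, Rectangle visibleTiles)
    {
        const int TileSize = 32;   // assumed tile size in pixels
        spriteBatch.Begin();
        for (int y = visibleTiles.Top; y < visibleTiles.Bottom; y++)
            for (int x = visibleTiles.Left; x < visibleTiles.Right; x++)
                spriteBatch.Draw(sheet,
                                 new Vector2(x * TileSize, y * TileSize),  // screen position
                                 SourceForTile(map[x, y], TileSize, 8),    // cell in the sheet (8 columns assumed)
                                 Color.White);
        spriteBatch.End();
    }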
And I disagree with the two comments, actually. Making a change of this caliber would be difficult when your program is mature.

3d Tiled terrain

I'm trying to create tiled terrain in 3D with XNA. I checked tutorials on how to do it (Riemer's and Allen's). Allen's tutorial has exactly the result I want to achieve; however, I'm not sure about performance - it seems he is using a single quadrilateral to draw all the terrain and processing it with a pixel shader, which means the whole terrain will be processed each frame.
Currently I'm drawing a quadrilateral for each tile (example) - this allows drawing only the visible tiles, but it also means that many more vertices need to be processed each frame and a lot of "DrawIndexedPrimitives" calls are made.
Am I doing it right, or is Allen's way faster? Is there a better way to do tiled terrain?
Thanks.
Totally depends on your terrain complexity and size. Typically, you will have terrain tiles with more than one quad per tile (for instance, a tile could consist of 4096 triangles) and then displace the vertices to get the terrain you want. Still, each tile will be an indexed primitive, but a single draw call will result in lots of triangles and a larger part of the terrain. Taking this idea further, you can make the tiles in the distance larger so you don't get too much detail (look for quadtree/clipmap-based terrain approaches; you'll get something like this: http://twitpic.com/89y5kn.)
Alternatively, if you can displace in the vertex shader, you can use instancing to further reduce the number of draw calls. Per instance, you pass the UV coordinates into your heightfield and the world-space position, and then you again render high-resolution tiles, but now you may wind up with a single draw call for the whole terrain.
For a small game, you might want to generate only a few high-resolution tiles (65k triangles or so) and then frustum-cull them. That gives you a large terrain easily and is still manageable; but this definitely doesn't scale too well :) Depends on your needs.
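A minimal XNA sketch of that frustum-culled per-tile draw; TerrainTile (with a precomputed BoundingBox, its buffers and counts) and effect are hypothetical placeholders:

    BoundingFrustum frustum = new BoundingFrustum(view * projection);

    foreach (TerrainTile tile in tiles)
    {
        if (!frustum.Intersects(tile.Bounds))    // skip tiles outside the camera frustum
            continue;

        GraphicsDevice.SetVertexBuffer(tile.VertexBuffer);
        GraphicsDevice.Indices = tile.IndexBuffer;
        effect.CurrentTechnique.Passes[0].Apply();
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                                             0, 0, tile.VertexCount, 0, tile.TriangleCount);
    }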
For the texture tiles, you can also use a low-resolution index texture and do the lookup into an atlas per-pixel or just store the indices in the vertex buffer and interpolate them (this is very common: Store 4 weights per vertex and use it to look up into four different textures.)

.NET high level graphics library

I am programming various simulation tools in C#/.NET.
What I am looking for is a high-level visualization library: create a scene with a camera and some standard controls, and render a few hundred thousand spheres to it, or some wireframes - that kind of thing. If it takes more than one line to initialize a context, it deviates from my ideal.
I've looked at SlimDX, but it's way lower-level than I'm looking for (at least the documented parts, but I don't really care for any others). WPF perspective looked cool, but it seems targeted at static XAML-defined scenes, and that doesn't really suit me either.
Basically, I'm looking for the kind of features languages like BlitzBasic used to provide. Does that exist at all?
I'm also interested in this (as I'm also developing simulation tools) and ended up hacking together some stuff in XNA. It's definitely a lot more work than you've described, however. Note that anything you can do in WPF via XAML can also be done via code, as XAML is merely a representation of an object hierarchy and its relationships. I think that may be your best bet, though I don't have any metrics on what kind of performance you could expect with a few hundred thousand spheres (you're absolutely going to need some culling in that case and the culling itself may be expensive if you don't use optimizations like grid partitioning.)
EDIT: If you really need to support 100K entities and they can all be rendered as spheres, I would recommend that you bypass the 3d engine entirely and only use XNA for math. I would imagine an approach like the following:
Use XNA to set up Camera (View) and Perspective matrices. It has some handy Matrix static functions that make this easy.
Compute the Projection matrix and project all of your 'sphere' origin points into the viewing frustum. This will give you X,Y screen coordinates and Z depth in the frustum. You can either express this as 100K individual matrix multiplications or as a multiplication of the Projection matrix by a single 3 x 100K element matrix. In the former case, this is a great candidate for parallelism using the new .NET 4 Parallel functionality.
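A minimal sketch of that projection step, using XNA purely for the math types plus .NET 4's Parallel.For; points, projected, and the camera/screen parameters are hypothetical inputs:

    // using Microsoft.Xna.Framework;   (math types only)
    // using System.Threading.Tasks;
    Matrix view = Matrix.CreateLookAt(cameraPos, cameraTarget, Vector3.Up);
    Matrix proj = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, aspect, 0.1f, maxDist);
    Matrix viewProj = view * proj;

    Parallel.For(0, points.Length, i =>
    {
        Vector4 clip = Vector4.Transform(points[i], viewProj);   // homogeneous clip space
        float invW = 1f / clip.W;
        projected[i] = new Vector3(
            (clip.X * invW * 0.5f + 0.5f) * screenWidth,          // pixel X
            (1f - (clip.Y * invW * 0.5f + 0.5f)) * screenHeight,  // pixel Y (flipped)
            clip.Z * invW);                                       // depth in [0, 1]
    });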
If you find that the 100K matrix multiplications are a problem, you can reduce this significantly by culling points before transformation if you know that only a small subset of them will be visible at a given time. For instance, you can invert the Projection matrix to find the bounds of your frustum in your original space and create an axis-aligned bounding box for the frustum. You can then exclude all points outside this box (simple comparison tests in X, Y and Z). You only need to recompute this bounding box when the Projection matrix changes, so if it changes infrequently, this can be a reasonable optimization.
Once you have your transformed points, clip any outside the frustum (Z < 0, Z > maxDist, X < 0, Y < 0, X > width, Y > height). You can now render each point by drawing a filled circle, with a radius inversely proportional to Z (Z = 0 would have the largest radius and Z = maxDist would probably fade to a single point). If you want to provide a sense of shading/depth, you can render with a shaded brush to very loosely emulate lighting on spheres. This works because everything in your scene is a sphere and you're presumably not worried about things like shadows. All of this would be fairly easy to do in WPF (including the shaded brush), but be sure to use DrawingVisual classes and not framework elements. Also, you'll need to make sure you draw in the correct Z order, so it helps if you store the transformed points in a data structure that sorts as you add.
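Sketching the WPF side under the same assumptions (reusing the projected points from the previous sketch; maxRadius is hypothetical, and the DrawingVisual still needs to be hosted in a visual-layer element to show up):

    DrawingVisual visual = new DrawingVisual();
    using (DrawingContext dc = visual.RenderOpen())
    {
        // A radial gradient loosely fakes sphere shading.
        Brush brush = new RadialGradientBrush(Colors.White, Colors.SteelBlue);
        foreach (Vector3 p in projected)   // assumed already sorted back-to-front
        {
            if (p.Z < 0 || p.Z > 1) continue;              // behind camera / beyond far plane
            double radius = maxRadius * (1.0 - p.Z);       // nearer points draw larger
            dc.DrawEllipse(brush, null, new Point(p.X, p.Y), radius, radius);
        }
    }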
If you're still having performance problems, there are further optimizations you can pursue. For instance, if you know that only a subset of your points are moving, you can cache the transformed locations for the immobile points. It really depends on the nature of your data set and how it evolves.
Since your data set is so large, you might consider changing the way you visualize it. Instead of rendering 100K points, partition your working space into a volumetric grid and record the number (density) of points inside each grid cube. You can then project only the center of each grid cube and render it as a 'sphere' with some additional feedback (like color, opacity or brush texture) to indicate the point density. You can combine this technique with the traditional rendering approach by rendering near points as 'spheres' and far points as 'cluster' objects with some brush patterning to match the density. One simple algorithm is to consider a bounding sphere around the camera: all points inside the sphere are transformed normally, while beyond the sphere you only render using the density grid.
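A minimal sketch of the density grid, with gridOrigin, cellSize, gridDim and DrawProxySphere as hypothetical placeholders:

    int[,,] density = new int[gridDim, gridDim, gridDim];

    // Bin the far-away points into grid cells.
    foreach (Vector3 p in farPoints)
    {
        int gx = (int)((p.X - gridOrigin.X) / cellSize);
        int gy = (int)((p.Y - gridOrigin.Y) / cellSize);
        int gz = (int)((p.Z - gridOrigin.Z) / cellSize);
        if (gx >= 0 && gx < gridDim && gy >= 0 && gy < gridDim && gz >= 0 && gz < gridDim)
            density[gx, gy, gz]++;
    }

    // Draw one proxy 'sphere' per occupied cell, denser cells more opaque.
    for (int x = 0; x < gridDim; x++)
        for (int y = 0; y < gridDim; y++)
            for (int z = 0; z < gridDim; z++)
            {
                if (density[x, y, z] == 0) continue;
                Vector3 center = gridOrigin + new Vector3(x + 0.5f, y + 0.5f, z + 0.5f) * cellSize;
                float opacity = Math.Min(1f, density[x, y, z] / 100f);
                DrawProxySphere(center, opacity);   // e.g. a shaded circle as above
            }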
Maybe XNA Game Studio is what you are looking for.
Also take a look at DirectX.
WPF perspective looked cool, but it seems targeted at static XAML defined scenes
Look again, WPF can be as dynamic as you will ever need.
You can write any WPF program, including 3D, totally without XAML.
Do you have to use C#/.NET, or would MonoDevelop be good enough? I can recommend http://unity3d.com/ if you want a powerful 3D engine.
