I have a set of 3D points that (may) form a concave shape. They are already ordered clockwise. The resulting mesh will be (nearly) planar with some slight height adjustments.
What's the best algorithm for me to use in C# (Unity) to triangulate a mesh out of these points?
I would start from the Triangle.NET open source project. You may need to derive your own Vertex type to keep the Z values (triangulation is always performed in the XY plane).
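If pulling in a library feels like overkill: since the points are already ordered, plain ear clipping also handles a concave outline. A minimal sketch, assuming the points have been projected onto the XY plane with clockwise winding (`EarClipper` is a made-up name for illustration, not a Triangle.NET type; Z values can be carried alongside and reattached to the resulting indices):

```csharp
using System;
using System.Collections.Generic;

// Minimal ear-clipping triangulation for a simple polygon whose
// vertices are already ordered clockwise in the XY plane.
public static class EarClipper
{
    // Returns triangle indices into the input list (3 ints per triangle).
    public static List<int> Triangulate(IList<(float x, float y)> pts)
    {
        var indices = new List<int>();
        var remaining = new List<int>();
        for (int i = 0; i < pts.Count; i++) remaining.Add(i);

        while (remaining.Count > 3)
        {
            bool clipped = false;
            for (int i = 0; i < remaining.Count; i++)
            {
                int a = remaining[(i + remaining.Count - 1) % remaining.Count];
                int b = remaining[i];
                int c = remaining[(i + 1) % remaining.Count];
                if (!IsConvex(pts[a], pts[b], pts[c])) continue;

                // An "ear" is a convex corner containing no other vertex.
                bool ear = true;
                foreach (int p in remaining)
                {
                    if (p == a || p == b || p == c) continue;
                    if (Inside(pts[a], pts[b], pts[c], pts[p])) { ear = false; break; }
                }
                if (!ear) continue;

                indices.Add(a); indices.Add(b); indices.Add(c);
                remaining.RemoveAt(i);
                clipped = true;
                break;
            }
            if (!clipped) throw new InvalidOperationException("Degenerate polygon");
        }
        indices.AddRange(remaining); // the final triangle
        return indices;
    }

    // 2D cross product; with clockwise winding, convex corners come out negative.
    static float Cross((float x, float y) o, (float x, float y) a, (float x, float y) b)
        => (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);

    static bool IsConvex((float x, float y) a, (float x, float y) b, (float x, float y) c)
        => Cross(a, b, c) < 0f;

    static bool Inside((float x, float y) a, (float x, float y) b, (float x, float y) c, (float x, float y) p)
    {
        // Point-in-triangle via consistent cross-product signs.
        float d1 = Cross(a, b, p), d2 = Cross(b, c, p), d3 = Cross(c, a, p);
        bool neg = d1 < 0 || d2 < 0 || d3 < 0;
        bool pos = d1 > 0 || d2 > 0 || d3 > 0;
        return !(neg && pos);
    }
}
```

The resulting index list plugs straight into `mesh.triangles`; flip the winding test if your polygons are counter-clockwise.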
I'm working on a mesh generation script in Unity3D.
There are two curves in a 3D space, each curve has more than two nodes. I want to create a mesh of a plane where these two curves are two sides of the plane.
The nodes are used as vertices, and then I'll need to compute the values for mesh.triangles. Each curve may have a different number of nodes (vertices), so how should I group them into the triangle int[] so that, firstly, all vertices are used in order to best describe the shape, and secondly, no triangle overlaps another, for better performance?
PS: These two curves will always be almost parallel, and they have no intersection when viewed in the xy, xz, or yz planes. So we don't need to handle any complicated/special scenario, e.g. picking vertices in different orders from the two curves for different triangles.
Please see the attached picture.
Thanks very much.
With the tip from DragonCoder on the Unity forum, I figured this out.
His Tip:
"As for a more focused solution: I'd imagine having a virtual train that tries to run down this track. We can say it has flexible wheels, so that its axle would be at an angle that is the average of what is exactly perpendicular to either curve.
Then whenever its "wheels" hit a point, you add those to a list. From that list it should not be too hard to form a set of triangles."
My reply:
"I applied two rules to each vertex: if it's a beginning or end point, it can only work with one vertex from the same curve to form a triangle; every other vertex shall work with two other vertices from the same curve, unless no eligible vertex from the same curve has been passed by the train. So each time a vertex is passed by the train, it can either form one triangle with the other vertices the train has passed, or just wait for the train to pass more."
DragonCoder also mentioned applying a Delaunay triangulation (implementations are on GitHub), which is a better solution that will work for more scenarios, but the math is beyond me.
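A sketch of how the "train" idea above can come out in code. The assumptions here are mine, not DragonCoder's exact method: curve A's vertices come first in the vertex array and curve B's follow at offset `a.Count`, and the "train" advances along whichever curve's next vertex keeps the bridging edge shorter:

```csharp
using System;
using System.Collections.Generic;

// Greedy stitching of two roughly parallel curves into a triangle strip.
// Each advance along one curve emits exactly one triangle, so every
// vertex of both curves gets used and no triangles overlap.
public static class CurveStitcher
{
    public static List<int> Stitch(IList<(float x, float y, float z)> a,
                                   IList<(float x, float y, float z)> b)
    {
        var tris = new List<int>();
        int ia = 0, ib = 0;
        while (ia < a.Count - 1 || ib < b.Count - 1)
        {
            bool advanceA;
            if (ib == b.Count - 1) advanceA = true;       // curve B exhausted
            else if (ia == a.Count - 1) advanceA = false; // curve A exhausted
            else
                // Advance the side whose next vertex keeps the bridge shorter.
                advanceA = Dist(a[ia + 1], b[ib]) <= Dist(a[ia], b[ib + 1]);

            if (advanceA)
            {
                tris.Add(ia); tris.Add(ia + 1); tris.Add(a.Count + ib);
                ia++;
            }
            else
            {
                tris.Add(ia); tris.Add(a.Count + ib + 1); tris.Add(a.Count + ib);
                ib++;
            }
        }
        return tris;
    }

    static float Dist((float x, float y, float z) p, (float x, float y, float z) q)
    {
        float dx = p.x - q.x, dy = p.y - q.y, dz = p.z - q.z;
        return dx * dx + dy * dy + dz * dz; // squared distance is enough to compare
    }
}
```

This always produces (countA - 1) + (countB - 1) triangles; check the winding against your normals and swap two indices per triangle if the mesh renders inside-out.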
Now, I know mesh simplification is something people have been studying for years, but I don't really need the sort of simplification you may be thinking of. I have been working on a game that makes heavy use of procedural meshes. One of the algorithms I'm using has a big flaw: it creates a ton of artifact triangles in the mesh. For example, if I had a flat square plane and performed this algorithm on it, it would still be a flat square plane, but thousands to millions of useless triangles would get added.
So my question is: how can I simplify a mesh by collapsing useless triangles? I don't care about reducing the overall polygon count; I simply want to remove triangles that lie on flat surfaces and are 100% useless.
Here's an example picture of the problem; it's a flat side of a cube. This picture should describe the problem pretty clearly. I'm using Unity and C#.
I have done a ton of research and I keep hearing about edge collapse, but I can't find anything specifically about this case. Is edge collapse the correct method to use? And if so, how could I go about implementing it in a situation like this? All existing methods use it for the usual kind of mesh simplification.
Update:
Here's a short clip showing the before and after meshes.
I would not suggest edge collapse here. What I would do is:
1. Compute the normal of each triangle and classify the triangles into clusters (lists of triangles) that share the same plane (= same normal and same distance to the origin).
2. For each cluster, classify the triangles into subclusters of triangles that are connected (or overlapping, because it seems your image has overlaps).
3. For each subcluster, compute the boundary, or the convex hull if you have overlapping triangles.
4. Optional step: check for 3 or more consecutive boundary vertices that are aligned, and remove the intermediate vertices.
5. Triangulate the boundary. This is trivial for a convex hull, and easy for a non-convex boundary.
6. Replace all triangles in the subcluster with the triangles from step 5.
The trivial convex hull triangulation consists of picking one vertex and forming a triangle between it and each remaining edge of the hull.
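The collinear-vertex cleanup and the trivial fan triangulation might be sketched like this (hypothetical helper names; assumes the boundary loop has already been extracted and, for the fan, is convex):

```csharp
using System;
using System.Collections.Generic;

public static class BoundaryRetriangulator
{
    // Drop boundary vertices that lie on a straight segment between
    // their neighbours (the "3 or more aligned vertices" case).
    public static List<(float x, float y)> DropCollinear(
        IList<(float x, float y)> loop, float eps = 1e-6f)
    {
        var outPts = new List<(float x, float y)>();
        int n = loop.Count;
        for (int i = 0; i < n; i++)
        {
            var prev = loop[(i + n - 1) % n];
            var cur  = loop[i];
            var next = loop[(i + 1) % n];
            float cross = (cur.x - prev.x) * (next.y - prev.y)
                        - (cur.y - prev.y) * (next.x - prev.x);
            if (Math.Abs(cross) > eps) outPts.Add(cur); // keep only real corners
        }
        return outPts;
    }

    // Trivial convex triangulation: a fan from vertex 0.
    public static List<int> FanTriangulate(int vertexCount)
    {
        var tris = new List<int>();
        for (int i = 1; i < vertexCount - 1; i++)
        {
            tris.Add(0); tris.Add(i); tris.Add(i + 1);
        }
        return tris;
    }
}
```

The fan produces (n - 2) triangles for an n-vertex convex boundary, which is the minimum possible.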
So I am attempting to make complex polygons from a set of randomly generated vertices in 2D. I would like to allow concave polygons to exist, to ensure that every vertex in the set is included in the boundary (so the algorithm must handle convex AND concave hulls), and also to ensure that the lines forming the boundary never intersect. Every concave hull algorithm I've found assumes that varying levels of concavity are acceptable and that some points may not be part of the boundary.
I feel like this may be a much simpler problem than it seems to me, but I can't figure out how to order the vertices such that drawing a line between vertices with adjacent indices in the list makes a polygon conforming to those standards. For a convex hull it is easy: just find the centroid of the polygon and sort the vertices by their polar angle around it. But I am currently unaware of an equivalent idea for the concave case.
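For reference, the convex-case ordering mentioned above (centroid plus polar angle) is only a few lines. Note that it actually guarantees a non-self-intersecting polygon whenever the point set is star-shaped as seen from the centroid, which is slightly more general than convex, but it can still produce crossings for arbitrary concave sets:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class PolarSort
{
    // Order points counter-clockwise by angle about their centroid.
    public static List<(float x, float y)> Order(IEnumerable<(float x, float y)> pts)
    {
        var list = pts.ToList();
        float cx = list.Average(p => p.x);
        float cy = list.Average(p => p.y);
        return list
            .OrderBy(p => Math.Atan2(p.y - cy, p.x - cx)) // angle in (-pi, pi]
            .ToList();
    }
}
```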
I had to come up with a solution to this problem, with a twist.
I had to render a set of randomized points as, basically, a concave hull.
I used the marching cubes algorithm to generate a concave hull!
Basically, I just counted and weighted each point received, and computed edge indices for rendering them depending on my value grid.
https://stemkoski.github.io/Three.js/Marching-Cubes.html
I oriented myself on this implementation... have fun :)
I'm trying to create tiled terrain in 3D with XNA. I checked tutorials on how to do it (Riemer's and Allen's). Allen's tutorial has the exact result I want to achieve; however, I'm not sure about the performance. It seems he is using a single quadrilateral to draw the whole terrain and process it with a pixel shader, which means the whole terrain will be processed each frame.
Currently I'm drawing a quadrilateral for each tile (example), which allows drawing only the visible tiles, but it also means that many more vertices need to be processed each frame and a lot of DrawIndexedPrimitives calls are made.
Am I doing it right, or is Allen's way faster? Is there a better way to do tiled terrain?
Thanks.
Totally depends on your terrain complexity and size. Typically, you will have terrain tiles with more than one quad per tile (for instance, a tile could consist of 4096 triangles) and then displace the vertices to get the terrain you want. Each tile is still an indexed primitive, but a single draw call will produce lots of triangles and cover a larger part of the terrain. Taking this idea further, you can make the tiles in the distance larger so you don't get too much detail (look for quad-tree/clipmap-based terrain approaches; you'll get something like this: http://twitpic.com/89y5kn.)
Alternatively, if you can displace in the vertex shader, you can use instancing to further reduce the number of draw calls. Per instance, you pass the UV coordinates into your heightfield and the world-space position, and then you again render high-resolution tiles, but now you may wind up with a single draw call for the whole terrain.
For a small game, you might want to generate only a few high-resolution tiles (65k triangles or so) and then frustum-cull them. That gives you a large terrain easily and is still manageable, but it definitely doesn't scale too well :) Depends on your needs.
For the texture tiles, you can also use a low-resolution index texture and do the lookup into an atlas per pixel, or just store the indices in the vertex buffer and interpolate them (this is very common: store 4 weights per vertex and use them to look up four different textures.)
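To illustrate "more than one quad per tile as a single indexed primitive", here is a sketch that builds the index buffer for a tile of n-by-n quads (names and the winding order are my own assumptions; flip the winding to match your culling mode):

```csharp
using System;

// One terrain tile as a single indexed primitive: an n-by-n grid of
// quads (2*n*n triangles) whose vertices a vertex shader can later
// displace by a heightfield. The whole tile is then one
// DrawIndexedPrimitives call.
public static class TileGrid
{
    // The vertex grid is (n+1) x (n+1); returns triangle-list indices.
    public static int[] BuildIndices(int n)
    {
        var indices = new int[n * n * 6];
        int k = 0;
        for (int z = 0; z < n; z++)
        {
            for (int x = 0; x < n; x++)
            {
                int i0 = z * (n + 1) + x; // top-left corner of this quad
                int i1 = i0 + 1;          // top-right
                int i2 = i0 + (n + 1);    // bottom-left
                int i3 = i2 + 1;          // bottom-right
                indices[k++] = i0; indices[k++] = i2; indices[k++] = i1;
                indices[k++] = i1; indices[k++] = i2; indices[k++] = i3;
            }
        }
        return indices;
    }
}
```

With n = 45 a tile holds 4050 triangles, in the ballpark of the 4096 mentioned above, and the index buffer can be shared by every tile since only the vertex positions differ.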
I'm trying to build a 2.5D engine with XNA. Basically, I want to display 2D sprites (the main hero and other monsters) against a 3D background. The game will be a platformer.
Now, using a translation matrix on a sprite doesn't yield the same result as translating vertex geometry in world space.
I mean, if I apply
Matrix.CreateTranslation(new Vector3(viewportWidth / 2, viewportHeight / 2, 0));
the sprite will be translated to the middle of the screen (starting from the display's upper-left origin). But if I apply the same transform to a cube in world space, it will translate very far. This doesn't surprise me, but I wonder how to translate a sprite and a 3D object by the same distance, ignoring all the project/unproject coordinate stuff.
Thanks!
There are traditionally three matrices: World, View and Project.
BasicEffect, and most other 3D Effects, simply have those matrices. You use Project to define how points are projected from the 3D world onto the 2D viewport ((-1,-1) in the bottom-left of the viewport to (1,1) in the top-right). You set View to move your camera around in world space. And you use World to move your models around in world space.
SpriteBatch is a bit different. It has an implicit Project matrix that causes your world space to match the viewport's client space ((0,0) in the top-left and (width,height) in the bottom-right). You can pass a transformMatrix matrix to Begin which you can generally think of like the View matrix. And then the parameters you pass to Draw (position, rotation, scale, etc) work like the World matrix would.
If you need to do "weird" things to your World or Project matrices in SpriteBatch, you can just build those transforms into your transformMatrix. It may just involve some maths to "undo" the built-in transformations.
In XNA 4 you can also use an Effect (like BasicEffect) directly in SpriteBatch, which you can provide with arbitrary matrices (details).
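The implicit Project matrix described above boils down to a simple pixel-to-NDC mapping. A sketch of just the math, with no XNA types (this is what `Matrix.CreateOrthographicOffCenter(0, w, h, 0, 0, 1)` encodes, so handing that projection to a BasicEffect makes world units line up with sprite pixels):

```csharp
using System;

public static class SpriteSpace
{
    // Client-space pixels ((0,0) top-left, (w,h) bottom-right) to
    // normalized device coordinates ((-1,-1) bottom-left, (1,1) top-right).
    public static (float x, float y) ClientToNdc(float x, float y, float w, float h)
    {
        // x grows rightward in both spaces; y is flipped between them.
        return (2f * x / w - 1f, 1f - 2f * y / h);
    }
}
```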