My challenge is to randomly generate a voxel tree that looks something like these: https://imgur.com/a/LT17der (not my own voxel work in photo)
For now I'm just looking for ideas on how best to approach creating the trunk. I was thinking I'd start by setting the width and height of the trunk and adding each block at coordinate positions layer by layer, with some degree of randomness as to where exactly the blocks are placed.
Any thoughts and suggestions are appreciated - for now I'm looking to keep it simple.
I would try to make a recursive function to generate the voxel tree.
It would take four arguments:
A vector representing the base of the current node
A vector representing the extent of the node
A 3-dimensional array of booleans, which represents the cells of the tree which are filled by the algorithm.
A counter, to limit the recursion depth
In Unity3D C#, it would look like:
void fillTree(Vector3 basePos, Vector3 extent, bool[,,] output, int depth)
And the first call to this recursive function would be
bool[,,] output = new bool[sizeX, sizeY, sizeZ];
fillTree(Vector3.zero, Vector3.up, output, 0);
This function will then:
Check that depth is not greater than the maximum, else return
Using a voxelization algorithm, write into output a capsule going from basePos to basePos + extent, with a thickness equal to something like 0.5^depth, so that each subtree is half as thick as its parent. (This can be pretty hard, so if performance is not a problem, just iterate naively over the whole grid, filling all voxels whose distance to the line segment [basePos; basePos + extent] is less than the thickness. Don't forget to map the float coordinates to grid coordinates by adding to each component of the vector half of the grid size in the same dimension.)
Choose a random number of subtrees (>= 2)
For each subtree :
Set a random horizontal angle alpha to be close to the horizontal angle of the parent, which can be retrieved using atan2(extent.z, extent.x). (Beware: angles are cyclic, so 1° is considered close to 359°!) The greater depth is, the closer the new angle must be to the parent one (the range must be proportional to something like 0.5^depth, the same as the thickness).
Set a vertical angle beta to be less than the vertical angle of the parent tree, so that branches seem to fall (the vertical angle of the parent can be computed as atan2(extent.y, sqrt(extent.x*extent.x + extent.z * extent.z)))
Compute the extent of the subtree, newExtent, with the coordinates (cos(alpha)*cos(beta), sin(beta), sin(alpha)*cos(beta)). Normalize this vector and multiply it by something like 0.5^depth.
call fillTree(basePos + extent, newExtent, output, depth + 1)
I have neither tested nor implemented this algorithm yet, and it is surely far less beautiful than what you are trying to achieve, but I hope it helps you.
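A rough, untested sketch of this recursion in Unity C# could look like the following; fillCapsule stands in for the naive voxelization step described above, and maxDepth, the branch-count range and the angle range are arbitrary choices:

// Untested sketch of the recursion described above. fillCapsule is a stub for the
// naive voxelization step; maxDepth and the branch-count/angle ranges are arbitrary.
void fillTree(Vector3 basePos, Vector3 extent, bool[,,] output, int depth)
{
    const int maxDepth = 5;                       // assumed recursion limit
    if (depth > maxDepth) return;

    float thickness = Mathf.Pow(0.5f, depth);
    fillCapsule(output, basePos, basePos + extent, thickness);

    float alphaParent = Mathf.Atan2(extent.z, extent.x);
    float betaParent  = Mathf.Atan2(extent.y, Mathf.Sqrt(extent.x * extent.x + extent.z * extent.z));

    int subtrees = Random.Range(2, 5);            // at least 2 branches, arbitrary upper bound
    for (int i = 0; i < subtrees; i++)
    {
        float range = Mathf.PI * Mathf.Pow(0.5f, depth);         // narrower spread at greater depth
        float alpha = alphaParent + Random.Range(-range, range);
        float beta  = Random.Range(0f, betaParent);               // flatter than the parent

        Vector3 dir = new Vector3(Mathf.Cos(alpha) * Mathf.Cos(beta),
                                  Mathf.Sin(beta),
                                  Mathf.Sin(alpha) * Mathf.Cos(beta));
        fillTree(basePos + extent, dir.normalized * Mathf.Pow(0.5f, depth), output, depth + 1);
    }
}

// Stub: fill all voxels within 'thickness' of the segment [a; b] (naive grid scan not shown).
void fillCapsule(bool[,,] output, Vector3 a, Vector3 b, float thickness) { /* ... */ }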
The program uses vertices in R^2 as nodes, there are edges between nodes, and ultimately more is built from there. There are many circuitous ways that a point (x,y) in R^2 might be reached, which may rely on layer after layer of trig functions, so it makes sense to define one vertex as the canonical vertex for all points in a square with side length 2*epsilon centered at that point. So various calculations happen, out comes a point (x,y) where I wish to put a vertex, and a vertex factory checks whether there is already a vertex deemed canonical that should be used for this point; if so it returns that vertex, if not a new vertex is created with the coordinates of the point and that vertex is now deemed canonical. I know this can lead to ambiguities given the possibility of overlap of the squares, but that is immaterial for the moment; epsilon can be set to make the probability of such a case vanishingly small.
Clearly a list of canonical vertices must be kept.
I have working implementations using List&lt;Vertex&gt; and HashSet&lt;Vertex&gt;; however, the vertex creation process seems to scale poorly when the number of vertices grows to over 100,000, and incredibly poorly if the number gets anywhere near 1,000,000.
I have no idea how to efficiently implement the VertexFactory. Right now Vertex has a method IsEquivalentTo(Vertex v) that returns true if v is contained in the square around the instance vertex calling it, false otherwise.
So the vertex creation process looks like this:
Some point (x,y) gets calculated and requests a new vertex from the vertex manager.
VertexManager creates a temporary vertex temp, then uses a foreach to iterate over every vertex in the container, using IsEquivalentTo(temp) to find a match and return it; if no match is found, temp is added to the container and returned. I should state that if a match is found, it obviously breaks out of the foreach loop.
I may be way off, but my first guess would be to put an order on the vertices such as
v1 < v2 iff ( (v1.X < v2.X) || ( (v1.X == v2.X) && (v1.Y < v2.Y) ) )
and then store the vertices in a sorted container. But to be honest, I do not know enough about the various containers to know which is most appropriate for the purpose, standard C# best practices, etc.
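For illustration, here is a hedged sketch of one way to exploit spatial locality without a full sort: a simple grid/spatial hash that buckets vertices by cells of side 2*epsilon, so a lookup only inspects the candidate's cell and its eight neighbours (this is not the k-d tree mentioned in the edit below, and the Vertex layout is an assumption):

using System;
using System.Collections.Generic;

// Hedged sketch: a spatial hash keyed by a grid cell of side 2*epsilon. A lookup
// only scans the cell containing the point and its 8 neighbours instead of every vertex.
class VertexFactory
{
    private readonly double epsilon;
    private readonly Dictionary<(long, long), List<Vertex>> buckets = new Dictionary<(long, long), List<Vertex>>();

    public VertexFactory(double epsilon) { this.epsilon = epsilon; }

    public Vertex GetOrCreate(double x, double y)
    {
        long cx = (long)Math.Floor(x / (2 * epsilon));
        long cy = (long)Math.Floor(y / (2 * epsilon));
        for (long dx = -1; dx <= 1; dx++)
            for (long dy = -1; dy <= 1; dy++)
                if (buckets.TryGetValue((cx + dx, cy + dy), out var cell))
                    foreach (var v in cell)
                        if (Math.Abs(v.X - x) <= epsilon && Math.Abs(v.Y - y) <= epsilon)
                            return v;                       // existing canonical vertex

        var created = new Vertex(x, y);                     // no match: this vertex becomes canonical
        if (!buckets.TryGetValue((cx, cy), out var home))
            buckets[(cx, cy)] = home = new List<Vertex>();
        home.Add(created);
        return created;
    }
}

class Vertex
{
    public double X { get; }
    public double Y { get; }
    public Vertex(double x, double y) { X = x; Y = y; }
}

With a roughly uniform point distribution this keeps each lookup close to constant time, regardless of how many vertices already exist.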
Edit:
I cannot mark the comment as an answer so thanks to GSerg whose comment guided me to kd-trees. This is exactly what I was looking for.
Thanks
From a real-time signal acquisition, I'm getting 8400 points and I need to graph them.
My problem is that there is a lot of noise in the data. Is there an algorithm that reduces the noise?
I need to know how many "plateaus" there are.
[figures omitted]
You can probably isolate the plateaus by means of a sliding window in which you compute the range (maximum value minus minimum value). Observe the resulting signal and see what threshold will discriminate.
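A hedged sketch of that sliding-window range idea (the window size and threshold are guesses to be tuned against the real data): wherever the local range stays below the threshold, the signal is on a plateau, and counting the transitions into the flat state counts the plateaus.

// Hedged sketch: count plateaus as runs where the local range (max - min) stays
// below a threshold. Window size and threshold are assumptions to tune.
static int CountPlateaus(double[] samples, int window = 25, double rangeThreshold = 5.0)
{
    int plateaus = 0;
    bool onPlateau = false;
    for (int i = 0; i + window <= samples.Length; i++)
    {
        double min = double.MaxValue, max = double.MinValue;
        for (int j = i; j < i + window; j++)
        {
            if (samples[j] < min) min = samples[j];
            if (samples[j] > max) max = samples[j];
        }
        bool flat = (max - min) < rangeThreshold;
        if (flat && !onPlateau) plateaus++;     // entering a new plateau
        onPlateau = flat;
    }
    return plateaus;
}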
Below is what you obtain by a horizontal morphological erosion, followed by counting the white pixels vertically. The slopes between the plateaus are very distinctive.
After segmenting the cloud, fitting the plateaus is easy.
I would:
compute BBOX or OBB of PCL
in case your PCL can have any orientation, use OBB, or simply find the 2 most distant points in the PCL and use that as the major direction.
sort the PCL by the BBOX's major axis (the biggest side of the BBOX or OBB)
In case your data always has the same orientation you can skip #1; for a non-axis-aligned orientation just sort by
dot(pnt[i]-p0,p1-p0)
where p0,p1 are endpoints of major side of OBB or most distant points in PCL and pnt[i] are the points from your PCL.
use sliding average to filter out noise
so just a "curve" remains and not that zig-zag pattern your filtered image shows.
threshold slope change
let's call the detected changes + (increasing slope) and - (decreasing slope); you just remember the position (index in the sorted PCL) of each and then detect these patterns:
UP (positive peak): + - (here is your UP) -
DOWN (negative peak): - + (here is your DOWN) +
to obtain the slope you can simply use atan2 ...
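A hedged sketch of the sort-by-projection and sliding-average steps above, assuming p0 and p1 (the endpoints of the major direction) are already known; Vector3 is assumed to be an XNA/Unity-style type and the window size is a guess:

using System;
using System.Collections.Generic;
using System.Linq;

// Hedged sketch: sort the points by their projection onto the major direction p0->p1,
// then apply a sliding average to the height component to remove the zig-zag noise.
static float[] SortAndSmooth(List<Vector3> pcl, Vector3 p0, Vector3 p1, int window = 10)
{
    Vector3 dir = p1 - p0;
    List<Vector3> ordered = pcl.OrderBy(p => Vector3.Dot(p - p0, dir)).ToList();

    float[] smoothed = new float[ordered.Count];
    for (int i = 0; i < ordered.Count; i++)
    {
        int lo = Math.Max(0, i - window), hi = Math.Min(ordered.Count - 1, i + window);
        float sum = 0f;
        for (int j = lo; j <= hi; j++) sum += ordered[j].Y;   // average the "height" component
        smoothed[i] = sum / (hi - lo + 1);
    }
    return smoothed;
}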
I am trying to write an algorithm (in c#) that will stitch two or more unrelated heightmaps together so there is no visible seam between the maps. Basically I want to mimic the functionality found on this page :
http://www.bundysoft.com/wiki/doku.php?id=tutorials:l3dt:stitching_heightmaps
(You can just look at the pictures to get the gist of what I'm talking about)
I also want to be able to take a single heightmap and alter it so it can be tiled, in order to create an endless world (All of this is for use in Unity3d). However, if I can stitch multiple heightmaps together, I should be able to easily modify the algorithm to act on a single heightmap, so I am not worried about this part.
Any kind of guidance would be appreciated, as I have searched and searched for a solution without success. Just a simple nudge in the right direction would be greatly appreciated! I understand that many image manipulation techniques can be applied to heightmaps, but have been unable to find an image processing algorithm that produces the results I'm looking for. For instance, image stitching appears to only work for images that have overlapping fields of view, which is not the case with unrelated heightmaps.
Would utilizing an FFT low-pass filter in some way work, or would that only be useful in generating a single tileable heightmap?
Because the algorithm is to be used in Unity3d, any C# code will have to be confined to .Net 3.5, as I believe that's the latest version Unity uses.
Thanks for any help!
Okay, it seems I was on the right track with my previous attempts at solving this problem. My initial attempt at stitching the heightmaps together involved the following steps for each point on the heightmap:
1) Find the average between a point on the heightmap and its opposite point. The opposite point is simply the first point reflected across either the x axis (if stitching horizontal edges) or the z axis (for the vertical edges).
2) Find the new height for the point using the following formula:
newHeight = oldHeight + (average - oldHeight)*((maxDistance-distance)/maxDistance);
Where distance is the distance from the point on the heightmap to the nearest horizontal or vertical edge (depending on which edge you want to stitch). Any point with a distance less than maxDistance (an adjustable value that affects how much of the terrain is altered) is adjusted based on this formula.
That was the old formula, and while it produced really nice results for most of the terrain, it was creating noticeable lines in the areas between the region of altered heightmap points and the region of unaltered heightmap points. I realized almost immediately that this was occurring because the slope of the altered regions was too steep in comparison to the unaltered regions, thus creating a noticeable contrast between the two. Unfortunately, I went about solving this issue the wrong way, looking for solutions on how to blur or smooth the contrasting regions together to remove the line.
After very little success with smoothing techniques, I decided to try and reduce the slope of the altered region, in the hope that it would better blend with the slope of the unaltered region. I am happy to report that this has improved my stitching algorithm greatly, removing 99% of the lines reported above.
The main culprit from the old formula was this part:
(maxDistance-distance)/maxDistance
which was producing a value between 0 and 1 linearly based on the distance of the point to the nearest edge. As the distance between the heightmap points and the edge increased, the heightmap points would utilize less and less of the average (as defined above) and shift more and more towards their original values. This linear interpolation was the cause of the too-steep slope, but luckily I found a built-in method in the Mathf class of Unity's API that allows for quadratic (I believe cubic) interpolation. This is the SmoothStep method.
Using this method (I believe a similar method can be found in the XNA framework, here), the change in how much of the average is used in determining a heightmap value becomes very severe in middle distances, but that severity lessens the closer the distance gets to maxDistance, creating a less severe slope that better blends with the slope of the unaltered region. The new formula looks something like this:
//Using Mathf - Unity only?
float weight = Mathf.SmoothStep(1f, 0f, distance/maxDistance);
//Using XNA
float weight = MathHelper.SmoothStep(1f, 0f, distance/maxDistance);
//If you can't use either of the two methods above
float input = distance/maxDistance;
float weight = 1f + (-1f)*(3f*(float)Math.Pow(input, 2f) - 2f*(float)Math.Pow(input, 3f));
//Then calculate the new height using this weight
newHeight = oldHeight + (average - oldHeight)*weight;
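For the single-tileable-heightmap case mentioned in the question, a hedged sketch of applying this weight along the top and bottom edges might look like the following (maxDistance is assumed to be at most half the map height, and the heightmap layout is an assumption):

// Hedged sketch: blend the top and bottom rows of a heightmap toward their shared
// average so the map tiles vertically. Assumes maxDistance <= half the map height.
static void MakeVerticallyTileable(float[,] heights, int maxDistance)
{
    int width = heights.GetLength(0);
    int height = heights.GetLength(1);
    for (int x = 0; x < width; x++)
    {
        for (int z = 0; z < maxDistance; z++)                  // distance from the nearest edge
        {
            float top = heights[x, z];
            float bottom = heights[x, height - 1 - z];
            float average = (top + bottom) * 0.5f;
            float weight = Mathf.SmoothStep(1f, 0f, (float)z / maxDistance);
            heights[x, z] = top + (average - top) * weight;
            heights[x, height - 1 - z] = bottom + (average - bottom) * weight;
        }
    }
}

At z = 0 the weight is 1, so both edges meet at the same averaged value and the map wraps without a seam.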
There may be even better interpolation methods that produce better stitching. I will certainly update this question if I find such a method, so anyone else looking to do heightmap stitching can find the information they need. Kudos to rincewound for being on the right track with linear interpolation!
What is done in the images you posted looks a lot like simple linear interpolation to me.
So basically: You take two images (Left, Right) and define a stitching region. For linear interpolation you could take the leftmost pixel of the left image (in the stitching region) and the rightmost pixel of the right image (also in the stitching region). Then you fill the space in between with interpolated values.
Take this example - I'm using a single line here to show the idea:
Left = [11,11,11,10,10,10,10]
Right= [01,01,01,01,02,02,02]
Lets say our overlap is 4 pixels wide:
Left = [11,11,11,10,10,10,10]
Right= [01,01,01,01,02,02,02]
^ ^ ^ ^ overlap/stitching region.
The leftmost value of the left image would be 10
The rightmost value of the right image would be 1.
Now we interpolate linearly between 10 and 1 in 2 steps, our new stitching region looks as follows
stitch = [10, 07, 04, 01]
We end up with the following stitched line:
line = [11,11,11,10,07,04,01,02,02,02]
If you apply this to two complete images you should get a result similar to what you posted before.
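A hedged sketch of that single-line example in C# (array layout as in the example above):

// Hedged sketch of the single-line example: blend linearly across the overlap
// region of two height rows.
static float[] StitchLine(float[] left, float[] right, int overlap)
{
    float a = left[left.Length - overlap];        // leftmost value of the stitching region (10)
    float b = right[overlap - 1];                 // rightmost value of the stitching region (1)

    float[] stitched = new float[left.Length + right.Length - overlap];
    for (int i = 0; i < left.Length - overlap; i++) stitched[i] = left[i];
    for (int i = 0; i < overlap; i++)
    {
        float t = overlap == 1 ? 0f : (float)i / (overlap - 1);
        stitched[left.Length - overlap + i] = a + (b - a) * t;   // 10, 7, 4, 1 for the example
    }
    for (int i = overlap; i < right.Length; i++) stitched[left.Length - overlap + i] = right[i];
    return stitched;
}

Running it on the example rows above reproduces the stitched line [11,11,11,10,7,4,1,2,2,2].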
I am trying to minimize the difference between sets of square markers in 3d space with a set of unknown parameters.
I have a model set of these square markers (represented by 3d position and rotation) which should at the end of optimization match up with a set of observed square markers.
I am using Levenberg–Marquardt to optimize the set of unknown parameters, these parameters will alter the position and rotation of the model 3d markers until they match (more or less) with the observed 3d marker positions.
The observed 3d markers come from a computer vision marker detection algorithm. It gives the id of the markers seen in each frame and the transformation from the camera of each marker (using Coplanar posit). Each 'frame' would only be able to see a small number of markers in the total set of markers, there will also be inaccuracies in the transformation.
I have thought about how to construct my minimization function, and my idea is to compare the relative rotations and minimize the difference between the rotations in each iteration of the LM optimisation.
Essentially:
foreach (Marker m1 in markers)
{
    foreach (Marker m2 in markers)
    {
        Vector3 eulerRotation = getRotation(m1, m2);
        ObservedMarker observed1 = getMatchingObserved(m1);
        ObservedMarker observed2 = getMatchingObserved(m2);
        Vector3 eulerRotationObserved = getRotation(observed1, observed2);
        double diffX = Math.Abs(eulerRotation.X - eulerRotationObserved.X);
        double diffY = Math.Abs(eulerRotation.Y - eulerRotationObserved.Y);
        double diffZ = Math.Abs(eulerRotation.Z - eulerRotationObserved.Z);
    }
}
Where diffX, diffY and diffZ are the values to be minimized.
I am using the following to calculate the angles:
Vector3 axis = Vector3.Cross(getNormal(m1), getNormal(m2));
axis.Normalize();
double angle = Math.Acos(Vector3.Dot(getNormal(m1), getNormal(m2)));
Vector3 modelRotation = calculateEulerAngle(axis, angle);
getNormal(Marker m) calculates the normal to the plane that the square marker lies on.
I am sure I am doing something wrong here though. Throwing this all into the LM optimiser (I am using ALGLib) doesn't seem to do anything; it goes through 1 iteration and finishes without changing any of the unknown parameters (initially all 0).
I am thinking that something is wrong with the function I am trying to minimize over. It seems sometimes the angle calculated (3rd line) returns NaN (I am currently setting this case to return diffX, diffY, diffZ as 0). Is it even valid to compare the euler angles as above?
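One likely cause of that NaN: Math.Acos returns NaN when floating-point error pushes the dot product of two nominally unit normals just outside [-1, 1], so clamping before the call avoids the special case. A minimal sketch (XNA-style Vector3 assumed):

// Hedged sketch: clamp the dot product into [-1, 1] before Acos so rounding error
// on unit normals cannot produce NaN.
static double AngleBetween(Vector3 n1, Vector3 n2)
{
    double dot = Vector3.Dot(n1, n2);
    dot = Math.Max(-1.0, Math.Min(1.0, dot));
    return Math.Acos(dot);
}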
Any help would be greatly appreciated.
Further information:
Program is written in C#, I am using XNA as well.
The model markers are represented by their four corners in 3D coords
All the model markers are in the same coordinate space.
Observed markers are the four corners as translations from the camera position in camera coordinate space
If m1 and m2 markers are the same marker id or if either m1 or m2 is not observed, I set all the diffs to 0 (no difference).
At first I thought this might be a typo, but then I realized that this could be a bug, having been a victim of similar cases myself in the past.
Shouldn't diffY and diffZ be:
double diffY = Math.Abs(eulerRotation.Y - eulerRotationObserved.Y);
double diffZ = Math.Abs(eulerRotation.Z - eulerRotationObserved.Z);
I don't have enough reputation to post this as a comment, hence posting it as an answer!
Any luck with this? Is it correct to assume that you want to minimize the "sum" of all diffs over all marker combinations? I think if you want to use LM you should not use Math.Abs.
One alternative would be to formulate your objective function manually and use another optimizer. I have recently ported two non-linear optimizers to C# which do not even require you to compute derivatives:
COBYLA2, which supports non-linear constraints but requires more iterations.
BOBYQA, which is limited to variable bound constraints but provides a considerably more efficient iteration scheme.
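Regarding the Math.Abs point above: Levenberg–Marquardt squares the residuals itself, and taking Math.Abs makes each residual non-differentiable at zero, which breaks the Jacobian the optimizer relies on. A hedged sketch of a signed residual vector (the pairing of model and observed rotations is assumed to be done already, XNA-style Vector3 assumed):

// Hedged sketch: signed residuals for LM instead of Math.Abs.
static double[] Residuals(Vector3[] modelRotations, Vector3[] observedRotations)
{
    double[] fi = new double[modelRotations.Length * 3];
    for (int i = 0; i < modelRotations.Length; i++)
    {
        fi[3 * i + 0] = modelRotations[i].X - observedRotations[i].X;   // signed, not Abs
        fi[3 * i + 1] = modelRotations[i].Y - observedRotations[i].Y;
        fi[3 * i + 2] = modelRotations[i].Z - observedRotations[i].Z;
    }
    return fi;
}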
I have a grid of 3D terrain, where the coordinates (x,y,z) of each grid point are known. Now, I have a monotonically increasing/decreasing line whose start point is also known. I want to find the point where the terrain and the line meet. What is the algorithm to do this?
What I can think of is to store the coordinates of the 3D terrain in an n x n matrix. I would then segmentize the line based on the grid of the terrain, start with the grid cell that is nearest to the line, and try to compute whether that plane intersects with the line; if yes, I get the coordinate and exit, if no, I proceed to the next segment.
But is my algorithm the best or most optimal solution? Or are there any existing libraries that already do this?
A different approach would be to triangulate the terrain grid to produce a set of facets and then intersect the line with those.
Obviously you'd need to do some optimisations like only checking those facets that intersect the bounding box of the line. You can do a quite cheap/quick facet bounding box to line bounding box check which will discount most of the triangles in the terrain very quickly.
If you arrange your triangles into an octree (as @sum1stolemyname suggested, but for the points) then this checking can be done from the "top down" and you should be able to discount whole sections of the terrain with a single calculation.
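For the facet test itself, a hedged sketch of a standard ray/triangle intersection (Möller–Trumbore; XNA-style Vector3 assumed, with the line treated as a ray from origin along dir):

// Hedged sketch: Möller-Trumbore ray/triangle intersection. Returns the distance t
// along the ray, or null if there is no hit.
static float? RayTriangle(Vector3 origin, Vector3 dir, Vector3 v0, Vector3 v1, Vector3 v2)
{
    const float eps = 1e-6f;
    Vector3 e1 = v1 - v0;
    Vector3 e2 = v2 - v0;
    Vector3 p = Vector3.Cross(dir, e2);
    float det = Vector3.Dot(e1, p);
    if (Math.Abs(det) < eps) return null;          // ray parallel to the triangle plane

    float invDet = 1f / det;
    Vector3 s = origin - v0;
    float u = Vector3.Dot(s, p) * invDet;
    if (u < 0f || u > 1f) return null;

    Vector3 q = Vector3.Cross(s, e1);
    float v = Vector3.Dot(dir, q) * invDet;
    if (v < 0f || u + v > 1f) return null;

    float t = Vector3.Dot(e2, q) * invDet;
    return t >= 0f ? (float?)t : null;
}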
Not directly an optimisation, just a few hints:
If your grid is large, it might be worthwhile to build an octree from your terrain in order to quickly reduce the number of grid nodes you have to check your line against. This can be more efficient in a huge grid (like 512*512 nodes), since only the leaf nodes your ray is passing through have to be considered.
Additionally, the octree can be used as a means to decide which parts of your grid are visible and therefore have to be drawn, by checking which leaf nodes are in the viewing frustum.
There is a catch, though: building the octree has to be done in advance, taking some time, and the tree is static. It cannot be easily modified after it has been constructed, since a modification in one node might affect several other nodes, not necessarily adjacent ones.
However, if you do not plan to modify your grid once it is created an octree will be helpful.
UPDATE
Now that I understand how you are planning to store your grid, I believe space partitioning will be an efficient way to find the nearest neighbour of the intersection line.
Finding the nearest neighbour linearly has a runtime complexity of O(N), while space-partitioning approaches have an average runtime complexity of O(log N).
If the terrain is not built via a nice function you will have to do a ray trace, i.e. traverse the line step by step in order to find an intersection. This procedure can take some time.
There are several parameters for the procedure, e.g. the offset you walk along the line in each step. If you take an offset that is too large, you might miss some "heights" of your terrain and thus not get the correct intersection. If the offset is too small, it will slow down your procedure.
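A hedged sketch of that naive traversal (the step size, the grid-to-world mapping and an XNA-style Vector3 are all assumptions):

// Hedged sketch: walk along the ray in fixed steps and report the first sample that
// falls at or below the terrain height. Grid cells are assumed to be 1 unit wide.
static Vector3? Intersect(Vector3 origin, Vector3 direction, float[,] heights, float step, float maxT)
{
    Vector3 dir = Vector3.Normalize(direction);
    for (float t = 0f; t < maxT; t += step)
    {
        Vector3 p = origin + dir * t;
        int x = (int)p.X, z = (int)p.Z;                      // grid cell under the sample
        if (x < 0 || z < 0 || x >= heights.GetLength(0) || z >= heights.GetLength(1))
            continue;                                         // outside the grid
        if (p.Y <= heights[x, z])
            return p;                                         // first sample at or below the terrain
    }
    return null;                                              // no hit within maxT
}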
However, there is a nice trick to save time. It's described here resp. here. It uses some kind of optimization structure for the terrain, i.e. it builds several levels of detail in the following way: the finest level of detail is just the terrain itself. The next (coarser) level of detail contains just a fourth of the number of original "pixels" in the terrain texture and combines 4 pixels into one, taking the maximum. The next level of detail is constructed analogously:
. . . .
... . ... .. .
....... .... .. .
........ => .... => .. => .
01234567 0246 04 0
1357 26 4
fine => => => => => coarse
If the ray cast is now performed, the coarser levels of detail are checked first:
/
/
/.
.
.
.
If the ray already misses the coarse level of detail, no finer level has to be examined. That's just a very rough idea of how the optimisation works, but it works quite well. Implementing it is quite a bunch of work, but the paper is a good help.
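A hedged sketch of building those "maximum" levels of detail (a square power-of-two heightmap is assumed):

using System;
using System.Collections.Generic;

// Hedged sketch: each coarser level halves the resolution and keeps the maximum of
// the 2x2 block it covers, so a ray that passes above a coarse cell can safely skip
// all the finer cells underneath it.
static List<float[,]> BuildMaxLevels(float[,] terrain)
{
    var levels = new List<float[,]> { terrain };
    float[,] current = terrain;
    while (current.GetLength(0) > 1 && current.GetLength(1) > 1)
    {
        int w = current.GetLength(0) / 2, h = current.GetLength(1) / 2;
        var coarser = new float[w, h];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                coarser[x, y] = Math.Max(
                    Math.Max(current[2 * x, 2 * y],     current[2 * x + 1, 2 * y]),
                    Math.Max(current[2 * x, 2 * y + 1], current[2 * x + 1, 2 * y + 1]));
        levels.Add(coarser);
        current = coarser;
    }
    return levels;
}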