How can you stitch multiple heightmaps together to remove seams? - c#

I am trying to write an algorithm (in c#) that will stitch two or more unrelated heightmaps together so there is no visible seam between the maps. Basically, I want to mimic the functionality found on this page:
http://www.bundysoft.com/wiki/doku.php?id=tutorials:l3dt:stitching_heightmaps
(You can just look at the pictures to get the gist of what I'm talking about)
I also want to be able to take a single heightmap and alter it so it can be tiled, in order to create an endless world (All of this is for use in Unity3d). However, if I can stitch multiple heightmaps together, I should be able to easily modify the algorithm to act on a single heightmap, so I am not worried about this part.
Any kind of guidance would be appreciated, as I have searched and searched for a solution without success. Just a simple nudge in the right direction would be greatly appreciated! I understand that many image manipulation techniques can be applied to heightmaps, but I have been unable to find an image processing algorithm that produces the results I'm looking for. For instance, image stitching appears to only work for images that have overlapping fields of view, which is not the case with unrelated heightmaps.
Would utilizing a FFT low pass filter in some way work, or would that only be useful in generating a single tileable heightmap?
Because the algorithm is to be used in Unity3d, any c# code will have to be confined to .Net 3.5, as I believe that's the latest version Unity uses.
Thanks for any help!

Okay, seems I was on the right track with my previous attempts at solving this problem. My initial attempt at stitching the heightmaps together involved the following steps for each point on the heightmap:
1) Find the average between a point on the heightmap and its opposite point. The opposite point is simply the first point reflected across either the x axis (if stitching horizontal edges) or the z axis (for the vertical edges).
2) Find the new height for the point using the following formula:
newHeight = oldHeight + (average - oldHeight)*((maxDistance-distance)/maxDistance);
Where distance is the distance from the point on the heightmap to the nearest horizontal or vertical edge (depending on which edge you want to stitch). Any point with a distance less than maxDistance (which is an adjustable value that affects how much of the terrain is altered) is adjusted based on this formula.
That was the old formula, and while it produced really nice results for most of the terrain, it was creating noticeable lines in the areas between the region of altered heightmap points and the region of unaltered heightmap points. I realized almost immediately that this was occurring because the slope of the altered regions was too steep in comparison to the unaltered regions, thus creating a noticeable contrast between the two. Unfortunately, I went about solving this issue the wrong way, looking for solutions on how to blur or smooth the contrasting regions together to remove the line.
After very little success with smoothing techniques, I decided to try and reduce the slope of the altered region, in the hope that it would better blend with the slope of the unaltered region. I am happy to report that this has improved my stitching algorithm greatly, removing 99% of the lines reported above.
The main culprit from the old formula was this part:
(maxDistance-distance)/maxDistance
which was producing a value between 0 and 1 linearly based on the distance of the point to the nearest edge. As the distance between the heightmap points and the edge increased, the heightmap points would utilize less and less of the average (as defined above), and shift more and more towards their original values. This linear interpolation was the cause of the too-steep slope, but luckily I found a built-in method in the Mathf class of Unity's API that allows for smooth, cubic interpolation. This is the SmoothStep method.
Using this method (a similar method can be found in the XNA framework, linked here), the change in how much of the average is used in determining a heightmap value is steepest at middle distances, but tapers off smoothly as the distance approaches 0 or maxDistance, creating a less severe slope that better blends with the slope of the unaltered region. The new formula looks something like this:
//Using Mathf - Unity only?
float weight = Mathf.SmoothStep(1f, 0f, distance / maxDistance);
//Using XNA
float weight = MathHelper.SmoothStep(1f, 0f, distance / maxDistance);
//If you can't use either of the two methods above, compute the
//smoothstep curve directly: weight = 1 - (3t^2 - 2t^3)
float t = distance / maxDistance;
float weight = 1f - (3f * t * t - 2f * t * t * t);
//Then calculate the new height using this weight
newHeight = oldHeight + (average - oldHeight) * weight;
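To show how the pieces fit together, here is a minimal sketch of the whole pass for making a single heightmap tile across its top and bottom edges. The array layout, method name, and the snapshot copy are my own assumptions, not part of the original code:

using System;

// Blend each point toward the average of itself and its reflected point,
// weighted by a smoothstep falloff, so opposite edges end up identical.
static void StitchHorizontalEdges(float[,] heights, int maxDistance)
{
    int depth = heights.GetLength(0);          // z axis
    int width = heights.GetLength(1);          // x axis
    var original = (float[,])heights.Clone();  // read from a snapshot so both
                                               // mirrored rows see the same data
    for (int z = 0; z < depth; z++)
    {
        int distance = Math.Min(z, depth - 1 - z);  // distance to nearest edge
        if (distance >= maxDistance) continue;

        float t = (float)distance / maxDistance;
        float weight = 1f - (3f * t * t - 2f * t * t * t);  // smoothstep(1,0,t)

        int zOpposite = depth - 1 - z;  // the point reflected across the x axis
        for (int x = 0; x < width; x++)
        {
            float average = (original[z, x] + original[zOpposite, x]) * 0.5f;
            heights[z, x] = original[z, x] + (average - original[z, x]) * weight;
        }
    }
}

At distance 0 the weight is 1, so the top and bottom rows both collapse to the same average and the map tiles seamlessly in that direction. Stitching two separate heightmaps works the same way, with the opposite point taken from the neighbouring map's edge region.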
There may be even better interpolation methods that produce better stitching. I will certainly update this question if I find such a method, so anyone else looking to do heightmap stitching can find the information they need. Kudos to rincewound for being on the right track with linear interpolation!

What is done in the images you posted looks a lot like simple linear interpolation to me.
So basically: You take two images (Left, Right) and define a stitching region. For linear interpolation you could take the leftmost pixel of the left image (in the stitching region) and the rightmost pixel of the right image (also in the stitching region). Then you fill the space in between with interpolated values.
Take this example - I'm using a single line here to show the idea:
Left = [11,11,11,10,10,10,10]
Right= [01,01,01,01,02,02,02]
Lets say our overlap is 4 pixels wide:
Left = [11,11,11,10,10,10,10]
Right= [01,01,01,01,02,02,02]
^ ^ ^ ^ overlap/stitching region.
The leftmost value (from the left image) in the stitching region would be 10.
The rightmost value (from the right image) would be 1.
Now we interpolate linearly between 10 and 1 in two steps; our new stitching region looks as follows:
stitch = [10, 07, 04, 01]
We end up with the following stitched line:
line = [11,11,11,10,07,04,01,02,02,02]
If you apply this to two complete images you should get a result similar to what you posted before.
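For completeness, here is a minimal C# sketch of that idea for a single row; the method name and array handling are my own, and a real implementation would loop over every row of the two heightmaps:

using System.Collections.Generic;

// Replace the stitching region with values linearly interpolated
// between its left and right endpoint heights.
static float[] StitchRow(float[] left, float[] right, int overlap)
{
    float a = left[left.Length - overlap];  // leftmost value in the region
    float b = right[overlap - 1];           // rightmost value in the region

    var line = new List<float>();
    for (int i = 0; i < left.Length - overlap; i++)
        line.Add(left[i]);                            // untouched left part
    for (int i = 0; i < overlap; i++)
        line.Add(a + (b - a) * i / (overlap - 1f));   // interpolated region
    for (int i = overlap; i < right.Length; i++)
        line.Add(right[i]);                           // untouched right part
    return line.ToArray();
}

Running it on the example rows above with overlap = 4 reproduces line = [11,11,11,10,07,04,01,02,02,02].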

Related

How to detect how many ups and downs are in a point cloud

From a real-time signal acquisition, I'm getting 8400 points and I need to graph them.
My problem is that there is a lot of noise in the data. Is there an algorithm that reduces the noise?
I need to know how many "plateaus" there are, and to reduce the data
to something like:
[figure: the desired plateau output]
You can probably isolate the plateaus by means of a sliding window in which you compute the range (maximum value minus minimum value). Observe the resulting signal and see what threshold will discriminate.
Below is what you obtain by a horizontal morphological erosion, followed by counting the white pixels vertically. The slopes between the plateaus are very distinctive.
After segmenting the cloud, fitting the plateaus is easy.
I would:
compute the BBOX or OBB of the PCL
in case your PCL can have any orientation, use the OBB, or simply find the 2 most distant points in the PCL and use them as the major direction.
sort the PCL by the major axis (the biggest side of the BBOX or OBB)
In case your data always has the same orientation you can skip #1; for a non-axis-aligned orientation just sort by
dot(pnt[i]-p0,p1-p0)
where p0,p1 are the endpoints of the major side of the OBB (or the most distant points in the PCL) and pnt[i] are the points from your PCL.
use sliding average to filter out noise
so just a "curve" remains and not that zig-zag pattern your filtered image shows.
threshold slope change
let's call the detected changes + (increasing slope) and - (decreasing slope); you just remember the position (index in the sorted PCL) of each, and then detect these patterns:
UP (positive peak): + - (here is your UP) -
DOWN (negative peak): - + (here is your DOWN) +
to obtain the slope you can simply use atan2 ...
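As a rough C# illustration of the sliding-average and slope-thresholding steps (the window size, threshold, method name, and the exact plateau-counting rule are my own assumptions; the points are assumed to be sorted along the major axis already):

using System;

// Smooth the values with a sliding average, then threshold the slope:
// +1 increasing, -1 decreasing, 0 flat. Count transitions into flat.
static int CountPlateaus(float[] y, int window, float slopeThreshold)
{
    var smooth = new float[y.Length];
    for (int i = 0; i < y.Length; i++)
    {
        int lo = Math.Max(0, i - window);
        int hi = Math.Min(y.Length - 1, i + window);
        float sum = 0f;
        for (int j = lo; j <= hi; j++) sum += y[j];
        smooth[i] = sum / (hi - lo + 1);   // sliding average filters the noise
    }

    int plateaus = 0, lastSlope = 0;
    for (int i = 1; i < smooth.Length; i++)
    {
        float slope = smooth[i] - smooth[i - 1];
        int sign = slope > slopeThreshold ? 1 : (slope < -slopeThreshold ? -1 : 0);
        if (sign == 0 && lastSlope != 0) plateaus++;  // slope just levelled off
        if (sign != 0) lastSlope = sign;
    }
    return plateaus;  // a flat segment at the very start is not counted here
}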

Translate Unity units of measurement?

In Unity one can use raycasting to calculate various measurements, such as diameter, thickness of a wall, and width. One way to do this is by capturing a user's mouse click on an object, using RaycastHits to capture the location of the mouse click on the object, and then casting additional rays depending on the measurement desired.
Seen below:
Thickness of the walls clicked is .0098, .0096, and .0072. Width is .0615, .0611, and .060. Diameter is .0475.
Though these measurements are (believed to be) executed and calculated correctly, it's unclear how the results translate to real-world units of measurement.
This is best demonstrated in the fourth image. Checking the same diameter in other CAD programs, such as NX, gives a diameter of 0.4210 inches. Thickness and width were measured as well, at .075244" and .252872" respectively.
So then, how do the results in Unity (results produced using Vector3.Distance to calculate the distance between two points) translate to real-world units of measurement?
Googling the subject yields a common answer: Unity's measurements are "game units" and can be used however desired. While I grasp this, I don't understand how to accomplish the translation of "game units", or whatever Unity's units of measurement truly are, to the measurement results I can see in CAD programs.
Results (CAD x Unity):
Thickness: .075244" x .0098, .0096, and .0072.
Width: .252872" x .0615, .0611, and .060.
Diameter: 0.4210" x .0475
(note1: model scales are identical in Unity and external CAD program.)
(note2: the slight variation in thickness and width results from Unity measurements coming at angles where the CAD program is measuring distance between the two planes, i.e. .009x and .06x.)
(note3: ignore the incorrect labeling of Width as 'Thickness' in the second visual, and the inch symbol, ", in all of the Unity visuals; both are incorrect.)
1 Unity unit is generally held to be 1 meter; however, as you've read, it's up to your implementation. In this case it looks like you're actually exporting from CAD with 1 inch = 1 unit, since your results seem similar but slightly off.
The reason you're getting inaccuracies is most likely that Unity's collision system is not extremely accurate: most colliders are in fact slightly larger than the mesh they represent, which will throw off your fine-tuned measurements significantly. On top of that, Unity has much lower precision than CAD. Since Unity is a game engine and needs to perform in real time, 3D position data is not very accurate (it gets pretty hazy around 4 digits of precision), and in fact gets significantly worse as you travel away from the origin.
I wouldn't recommend trying to use Unity for any kind of precise design work, especially when representing the real world, but if you're dead set on it, you might want to scale your objects up by a factor of 10 or 100 in order to keep your digits closer to the decimal point and reduce floating-point error. This is a hack, obviously.
You may want to also look at your physics settings: https://docs.unity3d.com/Manual/class-PhysicsManager.html
In particular, "Default Contact Offset" may be relevant (although I'm not sure if it affects raycasts).
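If the goal is simply to report real-world values, one pragmatic option (my own suggestion, not part of the answer above; the class and method names are invented) is to calibrate once against a feature whose real size you know from CAD, then reuse that factor:

using UnityEngine;

// Calibrate Unity's arbitrary "game units" against one known real-world
// measurement, then convert every other measured distance with it.
public static class UnitCalibration
{
    // e.g. a feature known from CAD to be 0.4210 inches across that
    // measures `measuredUnits` in Unity
    public static float UnitsPerInch(float measuredUnits, float knownInches)
    {
        return measuredUnits / knownInches;
    }

    public static float ToInches(Vector3 a, Vector3 b, float unitsPerInch)
    {
        return Vector3.Distance(a, b) / unitsPerInch;
    }
}

This sidesteps the question of what a Unity unit "really" is, though the precision caveats above still apply.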
PS: I'd post this as a comment but the rep system won't let me. Your description of the measurements in each environment is really confusing; next time maybe try to format it as a table or something?

Generate Voronoi diagram without using Fortune's algorithm

I'm hoping to create a Voronoi landscape in Unity in C#. I looked at a number of Unity project files, but they all implement Fortune's algorithm, which is completely over my head. Are there any other methods of generating a Voronoi diagram that are easier to understand?
Slow performance is completely fine with me.
Much appreciated!
Sidenote: Since I'm working in Unity and need to generate a 2D/3D mesh from the Voronoi diagram, a per-pixel distance check won't work :,(
On second thought, maybe I could use a 2D array of Vector2s instead of pixels, spaced 1.0 unit apart along the x and z axes.
There is a very simple way to create an approximated Voronoi diagram (VD). For every site s that should define a cell in the VD (in the 2D plane), center a cone at s with constant slope and a certain height. Then look from above onto that landscape of cones (where all the spikes are visible). The boundary where the different cones meet (projected onto the 2D plane) is the (approximated) Voronoi diagram.
(Image Source)
As you requested in the comments, getting the actual edge data seems not so easy, but there could be some graphical routines to generate it by intersecting the cones.
An alternative is to compute a Delaunay triangulation of the given point set. There are some implementations referenced in this related post (simple approximations are also mentioned). Then you compute the dual graph of your triangulation and you have the Voronoi diagram. (Dual graph means that for every edge AB in the triangulation there exists an edge in the VD bisecting the space between the two vertices A and B, and for every triangle there exists a vertex in the VD where the dual edges meet.) Otherwise, there are also many C# Voronoi implementations around: Unity-delaunay, but, as you mentioned, that uses the Fortune approach.
If you want to code everything yourself, you may compute a triangulation of the points with brute force for n points in O(n^2) time. Then apply in-circle tests and edge flips. That is, for every triangle t(abc), create the circumcircle C defined by its three vertices. Then check whether some other point d of your point set lies inside C. If so, flip the edge that is in t and also forms an edge of the triangle containing d. This flipping is done until all triangles fulfil the empty-circle property (the Delaunay condition). Again, with brute force this will take O(n^2) time. Then you can compute the dual graph as mentioned above.
(Image Source)
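For reference, the in-circle test is usually done with the standard determinant predicate below; the formulation is well known, but the method name and the counter-clockwise assumption are mine (and a fully robust version would need exact arithmetic):

using UnityEngine;

// Returns true if d lies inside the circumcircle of triangle (a, b, c).
// Assumes a, b, c are given in counter-clockwise order.
static bool InCircumcircle(Vector2 a, Vector2 b, Vector2 c, Vector2 d)
{
    double ax = a.x - d.x, ay = a.y - d.y;
    double bx = b.x - d.x, by = b.y - d.y;
    double cx = c.x - d.x, cy = c.y - d.y;

    double det =
        (ax * ax + ay * ay) * (bx * cy - cx * by) -
        (bx * bx + by * by) * (ax * cy - cx * ay) +
        (cx * cx + cy * cy) * (ax * by - bx * ay);

    return det > 0.0;  // positive determinant => d is inside the circle
}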
"Easiest? That's the brute-force approach: For each pixel in your output, iterate through all points, compute distance, use the closest. Slow as can be, but very simple. If performance isn't important, it does the job."
[1] Easiest algorithm of Voronoi diagram to implement?
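Combining that brute-force idea with the grid-of-Vector2s thought from the question, a minimal sketch could look like this (the grid layout and return format are illustrative assumptions):

using UnityEngine;

// Brute-force Voronoi: label every grid cell with the index of its nearest
// site. O(cells * sites), but simple - fine if performance doesn't matter.
static int[,] VoronoiGrid(Vector2[] sites, int width, int height, float spacing)
{
    var owner = new int[width, height];
    for (int ix = 0; ix < width; ix++)
    {
        for (int iz = 0; iz < height; iz++)
        {
            var p = new Vector2(ix * spacing, iz * spacing);
            int best = 0;
            float bestDist = float.MaxValue;
            for (int s = 0; s < sites.Length; s++)
            {
                float d = (sites[s] - p).sqrMagnitude;  // squared distance is enough
                if (d < bestDist) { bestDist = d; best = s; }
            }
            owner[ix, iz] = best;
        }
    }
    return owner;
}

Grid cells whose neighbours have a different owner lie on a Voronoi boundary, which gives you the edge information needed to build a mesh.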

Machine learning to recognize different dollar bills [duplicate]

I have some images of euro money bills. The bills are completely within the image and are mostly flat (e.g. little deformation), and the perspective skew is small (e.g. the image is taken roughly from above the bill).
Now I'm no expert in image recognition. I'd like to achieve the following:
Find the bounding box for the money bill (so I can "cut out" the bill from the noise in the rest of the image).
Figure out the orientation.
I think of these two steps as pre-processing, but maybe one can do the following steps without the above two. So with that I want to read:
The bill's serial number.
The bill's face value.
I assume this should be quite possible to do with OpenCV. I'm just not sure how to approach it. Would I pick a FaceDetector-like approach, or Hough transforms, or a contour detector on top of an edge detector?
I'd be thankful for any hints on further reading material as well.
Hough is great but it can be a little expensive
This may work:
- Use Threshold or Canny to find the edges of the image.
- Then use cvFindContours to identify the contours, and try to detect rectangles.
Check the squares.c example in the OpenCV distribution. It basically checks that the polygon approximation of a contour has 4 points and that the average angle between those points is close to 90 degrees.
Here is a code snippet from the squares.py example (it's the same, but in Python :P).
# ... some pre-processing ...
cvThreshold(tgray, gray, (l+1)*255/N, 255, CV_THRESH_BINARY)

# find contours and store them all as a list
count, contours = cvFindContours(gray, storage)
if not contours:
    continue

# test each contour
for contour in contours.hrange():
    # approximate contour with accuracy proportional
    # to the contour perimeter
    result = cvApproxPoly(contour, sizeof(CvContour), storage,
                          CV_POLY_APPROX_DP, cvContourPerimeter(contour)*0.02, 0)
    res_arr = result.asarray(CvPoint)

    # square contours should have 4 vertices after approximation,
    # relatively large area (to filter out noisy contours),
    # and be convex.
    # Note: the absolute value of the area is used because the
    # area may be positive or negative, in accordance with the
    # contour orientation
    if (result.total == 4 and
            abs(cvContourArea(result)) > 1000 and
            cvCheckContourConvexity(result)):
        s = 0
        for i in range(4):
            # find the minimum angle between joint
            # edges (maximum of cosine)
            t = abs(angle(res_arr[i], res_arr[i-2], res_arr[i-1]))
            if s < t:
                s = t
        # if the cosines of all angles are small
        # (all angles are ~90 degrees) then write the quadrangle
        # vertices to the resultant sequence
        if s < 0.3:
            for i in range(4):
                squares.append(res_arr[i])
- Using MinAreaRect2 (which finds the circumscribed rectangle of minimal area for a given 2D point set), get the bounding box of each rectangle. Using the bounding box points you can easily calculate the angle.
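If you do this from C#, the same step might look like the sketch below using the OpenCvSharp wrapper; the binding choice and method name are my assumptions (the old MinAreaRect2 call corresponds to Cv2.MinAreaRect there), and the 4 contour points are assumed to come from the approximation step above:

using OpenCvSharp;  // assumption: the OpenCvSharp .NET wrapper

// Get the rotated bounding box of the detected bill contour and its angle.
static float BillAngle(Point2f[] contourPoints)
{
    RotatedRect box = Cv2.MinAreaRect(contourPoints);
    return box.Angle;  // degrees; box.Center and box.Size locate the bill
}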
You can also find the C version, squares.c, under samples/c/ in your OpenCV dir.
There is a good book on OpenCV.
Using a Hough transform to find the rectangular bill shape (and its angle), and then finding rectangles/circles within it, should be quick and easy.
For more complex searching, something like a Haar classifier might help, e.g. if you need to find odd corners of bills in an image.
You can also take a look at the Template Matching methods in OpenCV; another option would be to use SURF features. They let you search for symbols & numbers invariant to size, angle, etc.

How to find a random point in a quadrangle?

I have to be able to set a random location for a waypoint for a flight sim. The maths challenge is straightforward:
"To find a single random location within a quadrangle, where there's an equal chance of the point being at any location."
Visually like this:
An example ABCD quadrangle is:
A:[21417.78 37105.97]
B:[38197.32 24009.74]
C:[1364.19 2455.54]
D:[1227.77 37378.81]
Thanks in advance for any help you can provide. :-)
EDIT
Thanks all for your replies. I'll be taking a look at this at the weekend and will award the accepted answer then. BTW I should have mentioned that the quadrangle can be CONVEX OR CONCAVE. Sry 'bout dat.
Split your quadrangle into two triangles and then use this excellent SO answer to quickly find a random point in one of them.
Update:
Borrowing this great link from Akusete on picking a random point in a triangle.
(from MathWorld - A Wolfram Web Resource: wolfram.com)
Given a triangle with one vertex at the origin and the others at positions v1 and v2, pick

x = A1*v1 + A2*v2

where A1 and A2 are uniform variates in the interval [0,1], which gives points uniformly distributed in a quadrilateral (left figure). The points not in the triangle interior can then either be discarded, or transformed into the corresponding point inside the triangle (right figure).
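A minimal C# sketch of that approach, combined with splitting the quadrangle ABCD into triangles ABC and ACD (the reflection trick and the area-weighted triangle choice are standard, but the helper names are mine, and a concave quad needs a diagonal that actually lies inside it):

using UnityEngine;

// Uniform random point in the triangle with vertex p and edge vectors v1, v2.
static Vector2 RandomPointInTriangle(Vector2 p, Vector2 v1, Vector2 v2)
{
    float a1 = Random.value, a2 = Random.value;
    if (a1 + a2 > 1f) { a1 = 1f - a1; a2 = 1f - a2; }  // reflect instead of discarding
    return p + a1 * v1 + a2 * v2;
}

static Vector2 RandomPointInQuad(Vector2 a, Vector2 b, Vector2 c, Vector2 d)
{
    // split ABCD along diagonal AC, then pick a triangle with probability
    // proportional to its area so the overall distribution stays uniform
    float areaAbc = Mathf.Abs((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y)) * 0.5f;
    float areaAcd = Mathf.Abs((c.x - a.x) * (d.y - a.y) - (d.x - a.x) * (c.y - a.y)) * 0.5f;

    if (Random.value * (areaAbc + areaAcd) < areaAbc)
        return RandomPointInTriangle(a, b - a, c - a);
    return RandomPointInTriangle(a, c - a, d - a);
}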
I believe there are two suitable ways to solve this problem.
The first, mentioned by other posters, is to find the smallest bounding box that encloses the quadrangle, then generate points in that box until you find a point which lies inside the quadrangle.
Find the bounding box (x, y, width, height)
Pick a random point x1,y1 in the ranges [x to x+width] and [y to y+height]
while (x1,y1 is not inside the quadrangle) {
    Select a new x1,y1
}
Assuming your quadrangle's area is Q and the bounding box's area is A, the probability that you still need another point after N tries is (1-Q/A)^N, which approaches 0 exponentially.
I would recommend the above approach, especially in two dimensions. It is very fast to generate the points and test them.
If you want a guarantee of termination, you can create an algorithm that only generates points within the quadrangle (easy), but you must ensure the probability distribution of the points is even throughout the quadrangle.
http://mathworld.wolfram.com/TrianglePointPicking.html
gives a very good explanation.
The "brute force" approach is simply to loop through until you have a valid coordinate. In pseudocode:
left   = min(pa.x, pb.x, pc.x, pd.x)
right  = max(pa.x, pb.x, pc.x, pd.x)
bottom = min(pa.y, pb.y, pc.y, pd.y)
top    = max(pa.y, pb.y, pc.y, pd.y)
do {
    x = left + fmod(rand(), right - left)
    y = bottom + fmod(rand(), top - bottom)
} while (!isin(x, y, pa, pb, pc, pd));
You can use a stock function pulled from the net for "isin". I realize that this isn't the fastest-executing thing in the world, but I think it'll work.
So, this time tackling how to figure out if a point is within the quad:
The four edges can be expressed as lines in y = mx + b form (vertical edges need special handling, since their slope is infinite). Check whether the point is above or below each of the four lines; taken together, this tells you whether it's inside or outside.
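A common alternative that avoids the slope bookkeeping (and handles vertical edges and concave quads for free) is the even-odd ray-casting test; a minimal sketch, assuming Unity's Vector2 but any point type works:

using UnityEngine;

// Even-odd rule: cast a horizontal ray from the point and count how many
// polygon edges it crosses; an odd count means the point is inside.
static bool IsInside(float x, float y, Vector2[] poly)
{
    bool inside = false;
    for (int i = 0, j = poly.Length - 1; i < poly.Length; j = i++)
    {
        bool crosses = (poly[i].y > y) != (poly[j].y > y) &&
                       x < (poly[j].x - poly[i].x) * (y - poly[i].y) /
                           (poly[j].y - poly[i].y) + poly[i].x;
        if (crosses) inside = !inside;
    }
    return inside;
}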
Are you allowed to just repeatedly try anywhere within the rectangle which bounds the quadrangle, until you get something within the quad? Might this even be faster than some fancy algorithm to ensure that you pick something within the quad?
Incidentally, in that problem statement, I think the use of the word "find" is confusing. You can't really find a random value that satisfies a condition; the randomizer just gives it to you. What you're trying to do is set parameters on the randomizer to give you values matching certain criteria.
I would divide your quadrangle into multiple figures, where each figure is a simple polygon with one side (or both sides) parallel to one of the axes. For example, for the figure above, I would first find the maximum rectangle that fits inside the quadrangle; the rectangle has to be parallel to the X/Y axes. Then, in the remaining area, I would fit triangles; such triangles will be adjacent to each side of the rectangle.
Then it is simple to write a function:
1) pick a figure at random, weighted by its area (otherwise the points won't be uniform across the whole quadrangle).
2) find a random point in the figure.
If the figure chosen in #1 is a rectangle, it should be pretty easy to find a random point in it. The tricky part is to write a routine which can find a random point inside a triangle.
You may randomly create points in a bounding box, stopping only after you find one that is inside your polygon.
So:
Find the box that contains all the points of your polygon.
Create a random point inside the bounds of the previously found box. Use random functions to generate the x and y values.
Check if that point is inside the polygon (See how here or here)
If that point is inside the polygon stop, you're done; if not, go to step 2
So, it depends on how you want your distribution.
If you want the points randomly sampled in your 2d view space, then Jacob's answer is great. If you want the points to be sort of like a perspective view (in your example image, more density in top right than bottom left), then you can use bilinear interpolation.
Bilinear interpolation is pretty easy. Generate two random numbers s and t in the range [0..1]. Then, if your input points are p0,p1,p2,p3, the bilinear interpolation is:
bilerp(s,t) = t*(s*p3+(1-s)*p2) + (1-t)*(s*p1+(1-s)*p0)
The main difference is whether you want your distribution to be uniform in your 2d space (Jacob's method) or uniform in parameter space.
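A minimal sketch of that formula in C#, assuming Unity's Vector2 and Random and the p0..p3 corner ordering from the answer:

using UnityEngine;

// Uniform in (s, t) parameter space; the 2D density follows the quad's
// perspective-like distortion when it is not a parallelogram.
static Vector2 RandomPointBilerp(Vector2 p0, Vector2 p1, Vector2 p2, Vector2 p3)
{
    float s = Random.value, t = Random.value;
    return t * (s * p3 + (1f - s) * p2) + (1f - t) * (s * p1 + (1f - s) * p0);
}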
This is an interesting problem and there's probably a really interesting answer, but in case you just want it to work, let me offer you something simple.
Here's the algorithm:
Pick a random point that is within the rectangle that bounds the quadrangle.
If it is not within the quadrangle (or whatever shape), repeat.
Profit!
edit
I updated the first step to mention the bounding box, per Bart K.'s suggestion.
