Google Maps zoom algorithm - c#

In my web application, I have coordinate bounds that I'd like to modify to simulate "zooming in" on Google Maps. It seems that zooming in and out on Google Maps simply halves or doubles each side of the bounding box. So my function to do this on the server looks like this:
private static decimal[] ZoomIn(decimal latMax, decimal lngMin, decimal latMin, decimal lngMax)
{
    decimal lngAdjustment = (lngMax - lngMin) * .25m;
    decimal latAdjustment = (latMax - latMin) * .25m;
    return new[]
    {
        // North
        latMax - latAdjustment,
        // West
        lngMin + lngAdjustment,
        // South
        latMin + latAdjustment,
        // East
        lngMax - lngAdjustment
    };
}
The problem with this approach is the latAdjustment. Compared to an actual zoom using the Google Maps control, this ends up being fairly accurate when zoomed in to city level. However, it is less accurate the further the view is zoomed out. I assume this is due to the Mercator projection of the earth that Google Maps uses. Is anyone aware of a better formula or method to use to simulate a "zoom"?
Update:
My issue has more to do with not knowing the correct formula than with Google Maps. Let me illustrate using some sample numbers.
Take Chicago, centered at 41.8563226156679,-87.7339862646484
And a bounding box surrounding that point:
West: -87.984955197753800
East: -87.483017331542900
South: 41.546931599561100
North: 42.164224124684300
By observing the behavior of Google Maps, this bounding box zoomed in one level will be:
- West: -87.859470731201100
- East: -87.608501798095600
- South: 41.701813184067600
- North: 42.010459667660800
(center is kept the same, there is no variance due to mouse movement, etc. I just used the manual zoom button, not the mouse)
Using my formula above, the longitude values come out correct. Example:
west = west + (east - west) * .25
-87.984955197753800 + ((-87.483017331542900 - -87.984955197753800) * .25) = -87.859470731201075
However, the same formula will not work for latitude. It comes close to the correct values, but is off by just enough that the map shifts noticeably. This effect is worse with larger bounding boxes and/or when the latitude is further from the equator. I assume this is due to the Mercator projection of the earth. Trig class was a long time ago for me, and at this point I'm unable to find a suitable formula for zooming latitude in this situation.

Judging by the fact that your testing method uses the mouse, it seems to me that your testing method is the problem. Since the map zooms in differently based on the centering of the mouse, not the center of the screen, the edge bounds vary, but not the difference between them. This seems to coincide with the details you have given.
Intuitively, there is nothing wrong with your formula -- just how you tested it.
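For what it's worth, the residual latitude error described in the question goes away if the 25% adjustment is done in Mercator-projected space rather than on raw degrees: project each latitude with y = ln(tan(π/4 + φ/2)), shave 25% off each side of the projected interval, and unproject. A sketch (Java here for illustration; it assumes the spherical web-Mercator model):

```java
class MercatorZoom {
    // Forward spherical Mercator: latitude in degrees -> projected y.
    static double latToY(double latDeg) {
        double phi = Math.toRadians(latDeg);
        return Math.log(Math.tan(Math.PI / 4.0 + phi / 2.0));
    }

    // Inverse spherical Mercator: projected y -> latitude in degrees.
    static double yToLat(double y) {
        return Math.toDegrees(2.0 * Math.atan(Math.exp(y)) - Math.PI / 2.0);
    }

    // Zoom in one level: remove 25% from each side of the box.
    // Longitude is linear in Mercator, so it can stay in degrees;
    // latitude is adjusted in projected space and converted back.
    static double[] zoomIn(double latMax, double lngMin, double latMin, double lngMax) {
        double lngAdjustment = (lngMax - lngMin) * 0.25;
        double yMax = latToY(latMax);
        double yMin = latToY(latMin);
        double yAdjustment = (yMax - yMin) * 0.25;
        return new double[] {
            yToLat(yMax - yAdjustment), // north
            lngMin + lngAdjustment,     // west
            yToLat(yMin + yAdjustment), // south
            lngMax - lngAdjustment      // east
        };
    }
}
```

Fed the Chicago bounding box from the question, the north/south values this produces come out in close agreement with the observed 42.0104597 / 41.7018132, while longitude behaves exactly like the original linear formula.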

Related

How to detect how many ups and downs are in a point cloud

From a real-time signal acquisition, I'm getting 8400 points and I need to graph them.
My problem is that there is a lot of noise in the data. Is there an algorithm that reduces the noise?
I also need to know how many "plateaus" there are, i.e. reduce the data to something like:
(figures omitted: the raw noisy signal and the desired plateau segmentation)
You can probably isolate the plateaus by means of a sliding window in which you compute the range (maximum value minus minimum value). Observe the resulting signal and see what threshold will discriminate.
Below is what you obtain by a horizontal morphological erosion, followed by counting the white pixels vertically. The slopes between the plateaus are very distinctive.
After segmenting the cloud, fitting the plateaus is easy.
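The sliding-window range suggestion can be sketched like this (Java used for illustration; the window size and whatever threshold you apply to the output are values to tune by eye, as suggested above):

```java
class SlidingRange {
    // Range (max - min) inside a centered sliding window.
    // Plateaus show up as stretches where the range stays near zero;
    // slopes between plateaus produce a large range.
    static double[] windowRange(double[] in, int window) {
        int half = window / 2;
        double[] out = new double[in.length];
        for (int i = 0; i < in.length; i++) {
            double min = Double.POSITIVE_INFINITY;
            double max = Double.NEGATIVE_INFINITY;
            // clamp the window at the array boundaries
            for (int j = Math.max(0, i - half); j <= Math.min(in.length - 1, i + half); j++) {
                min = Math.min(min, in[j]);
                max = Math.max(max, in[j]);
            }
            out[i] = max - min;
        }
        return out;
    }
}
```

Thresholding `out` then discriminates the flat regions from the slopes.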
I would:
1) compute the BBOX or OBB of the PCL
In case your PCL can have any orientation, use the OBB, or simply find the 2 most distant points in the PCL and use that as the major direction.
2) sort the PCL by the major axis of the BBOX (the biggest side of the BBOX or OBB)
In case your data always has the same orientation you can skip #1; for non-axis-aligned orientations just sort by
dot(pnt[i]-p0,p1-p0)
where p0,p1 are the endpoints of the major side of the OBB (or the most distant points in the PCL) and pnt[i] are the points from your PCL.
3) use a sliding average to filter out noise
so that just a "curve" remains instead of the zig-zag pattern your filtered image shows.
4) threshold the slope change
Let's call the detected changes + (increasing slope) and - (decreasing slope); you just remember the position (index in the sorted PCL) of each and then detect these patterns:
UP (positive peak): + - (here is your UP) -
DOWN (negative peak): - + (here is your DOWN) +
To obtain the slope you can simply use atan2 ...
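A rough sketch of the smooth-then-threshold part of this recipe, for a signal that is already sorted along the major direction (Java for illustration; the window size, slope threshold, and minimum run length are made-up tuning parameters):

```java
class PlateauDetector {
    // Smooth with a centered moving average of the given window size.
    static double[] slidingAverage(double[] in, int window) {
        int half = window / 2;
        double[] out = new double[in.length];
        for (int i = 0; i < in.length; i++) {
            double sum = 0;
            int n = 0;
            for (int j = Math.max(0, i - half); j <= Math.min(in.length - 1, i + half); j++) {
                sum += in[j];
                n++;
            }
            out[i] = sum / n;
        }
        return out;
    }

    // Count maximal runs where the per-sample slope of the smoothed
    // signal stays below slopeThreshold; short runs are discarded.
    static int countPlateaus(double[] signal, int window, double slopeThreshold, int minLength) {
        double[] smooth = slidingAverage(signal, window);
        int count = 0;
        int runStart = -1;
        for (int i = 1; i < smooth.length; i++) {
            boolean flat = Math.abs(smooth[i] - smooth[i - 1]) < slopeThreshold;
            if (flat && runStart < 0) runStart = i;
            if ((!flat || i == smooth.length - 1) && runStart >= 0) {
                int end = flat ? i : i - 1;
                if (end - runStart + 1 >= minLength) count++;
                runStart = -1;
            }
        }
        return count;
    }
}
```

On a synthetic two-plateau signal with alternating noise, this counts the two flat regions and skips the ramp between them.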

Input a geographic coordinate, return a coordinate within x miles of that inputted coordinate - C# .NET

As the title suggests, I am trying to generate a coordinate that is within an x mile (or whichever unit is most convenient) radius of an input coordinate.
As an example:
I am given a geographic coordinate (lat, lon) of 39.083056, -94.820200.
I want to be returned another set of coordinates that is within an x mile radius of that coordinate, such as 39.110998, -94.799668.
The exact x mile radius isn't as important as the fact that the returned coordinates are within that x mile radius.
I have searched and searched, but I must be searching the wrong thing because all the posts that I have been able to find seem like they get very close to what I am trying to do but aren't quite hitting the nail on the head.
I'm sorry you're being downvoted to oblivion. I understand it can be frustrating trying to search for something without knowing what exactly to search for.
You may be interested in Orthodromic Lines/Distances: wiki. If this answer doesn't fulfil your needs, at least you have a new term to google and hopefully will lead you to one that does suit.
You could try using the Geo library. Its documentation is on the sparse side, but it does contain a method that could be useful to you: CalculateOrthodromicLine(startPoint, heading, distance)
The pseudocode would be something as simple as this:
var startPoint = new Coordinate(lat, long);
var heading = /* random between 0 and 360 degrees */;
var distance = /* random between 0 and X metres */;
var endPoint = GeoContext.Current.GeodeticCalculator
    .CalculateOrthodromicLine(startPoint, heading, distance)
    .Coordinate2; // <-- et voila!
Edit: As mentioned in the wiki, the Earth is not a perfect sphere, but a spheroid instead. The library's GeoContext.Current by default uses its Spheroid calculations, so you should be okay.
Good luck!
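If pulling in the Geo library is not an option, the same heading/distance idea can be done by hand with the spherical destination-point formula (a sketch in Java; it assumes a spherical earth, so it is slightly less accurate than the library's spheroid calculations, and the class and method names are made up):

```java
import java.util.Random;

class RandomNearbyPoint {
    static final double EARTH_RADIUS_M = 6371000.0; // mean radius, spherical model

    // Destination point given a start, an initial bearing (degrees) and a
    // distance (metres), using the spherical direct ("destination point") formula.
    static double[] destination(double latDeg, double lonDeg, double bearingDeg, double distanceM) {
        double phi1 = Math.toRadians(latDeg);
        double lambda1 = Math.toRadians(lonDeg);
        double theta = Math.toRadians(bearingDeg);
        double delta = distanceM / EARTH_RADIUS_M; // angular distance
        double phi2 = Math.asin(Math.sin(phi1) * Math.cos(delta)
                + Math.cos(phi1) * Math.sin(delta) * Math.cos(theta));
        double lambda2 = lambda1 + Math.atan2(
                Math.sin(theta) * Math.sin(delta) * Math.cos(phi1),
                Math.cos(delta) - Math.sin(phi1) * Math.sin(phi2));
        return new double[] { Math.toDegrees(phi2), Math.toDegrees(lambda2) };
    }

    // Random point within maxDistanceM of the start: random heading, random distance.
    static double[] randomNearby(double latDeg, double lonDeg, double maxDistanceM, Random rng) {
        double heading = rng.nextDouble() * 360.0;
        // sqrt makes the points uniform over the disc instead of clustered at the centre
        double distance = Math.sqrt(rng.nextDouble()) * maxDistanceM;
        return destination(latDeg, lonDeg, heading, distance);
    }
}
```

Drawing the distance as sqrt(random) * max spreads the points uniformly over the disc; a plain random * max clusters them near the centre, though either satisfies "within x miles".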

How can you stitch multiple heightmaps together to remove seams?

I am trying to write an algorithm (in C#) that will stitch two or more unrelated heightmaps together so there is no visible seam between the maps. Basically I want to mimic the functionality found on this page:
http://www.bundysoft.com/wiki/doku.php?id=tutorials:l3dt:stitching_heightmaps
(You can just look at the pictures to get the gist of what I'm talking about)
I also want to be able to take a single heightmap and alter it so it can be tiled, in order to create an endless world (All of this is for use in Unity3d). However, if I can stitch multiple heightmaps together, I should be able to easily modify the algorithm to act on a single heightmap, so I am not worried about this part.
Any kind of guidance would be appreciated, as I have searched and searched for a solution without success. Just a simple nudge in the right direction would be greatly appreciated! I understand that many image manipulation techniques can be applied to heightmaps, but I have been unable to find an image processing algorithm that produces the results I'm looking for. For instance, image stitching appears to only work for images that have overlapping fields of view, which is not the case with unrelated heightmaps.
Would utilizing a FFT low pass filter in some way work, or would that only be useful in generating a single tileable heightmap?
Because the algorithm is to be used in Unity3d, any C# code will have to be confined to .NET 3.5, as I believe that's the latest version Unity uses.
Thanks for any help!
Okay, it seems I was on the right track with my previous attempts at solving this problem. My initial attempt at stitching the heightmaps together involved the following steps for each point on the heightmap:
1) Find the average between a point on the heightmap and its opposite point. The opposite point is simply the first point reflected across either the x axis (if stitching horizontal edges) or the z axis (for the vertical edges).
2) Find the new height for the point using the following formula:
newHeight = oldHeight + (average - oldHeight)*((maxDistance-distance)/maxDistance);
Where distance is the distance from the point on the heightmap to the nearest horizontal or vertical edge (depending on which edge you want to stitch). Any point with a distance less than maxDistance (an adjustable value that affects how much of the terrain is altered) is adjusted based on this formula.
That was the old formula, and while it produced really nice results for most of the terrain, it was creating noticeable lines in the areas between the region of altered heightmap points and the region of unaltered heightmap points. I realized almost immediately that this was occurring because the slope of the altered regions was too steep in comparison to the unaltered regions, thus creating a noticeable contrast between the two. Unfortunately, I went about solving this issue the wrong way, looking for solutions on how to blur or smooth the contrasting regions together to remove the line.
After very little success with smoothing techniques, I decided to try and reduce the slope of the altered region, in the hope that it would better blend with the slope of the unaltered region. I am happy to report that this has improved my stitching algorithm greatly, removing 99% of the lines reported above.
The main culprit from the old formula was this part:
(maxDistance-distance)/maxDistance
which was producing a value between 0 and 1, linearly based on the distance of the point to the nearest edge. As the distance between the heightmap points and the edge increased, the heightmap points would utilize less and less of the average (as defined above) and shift more and more towards their original values. This linear interpolation was the cause of the too-steep slope, but luckily I found a built-in method in the Mathf class of Unity's API that allows for cubic (smoothstep) interpolation. This is the SmoothStep method.
Using this method (I believe a similar method can be found in the XNA framework, found here), the change in how much of the average is used in determining a heightmap value is steepest at middle distances, but tapers off as the distance approaches maxDistance, creating a less severe slope that better blends with the slope of the unaltered region. The new formula looks something like this:
//Using Mathf - Unity only?
float weight = Mathf.SmoothStep(1f, 0f, distance/maxDistance);
//Using XNA
float weight = MathHelper.SmoothStep(1f, 0f, distance/maxDistance);
//If you can't use either of the two methods above
float input = distance/maxDistance;
float weight = 1f + (-1f)*(3f*(float)Math.Pow(input, 2f) - 2f*(float)Math.Pow(input, 3f));
//Then calculate the new height using this weight
newHeight = oldHeight + (average - oldHeight)*weight;
There may be even better interpolation methods that produce better stitching. I will certainly update this question if I find such a method, so anyone else looking to do heightmap stitching can find the information they need. Kudos to rincewound for being on the right track with linear interpolation!
What is done in the images you posted looks a lot like simple linear interpolation to me.
So basically: You take two images (Left, Right) and define a stitching region. For linear interpolation you could take the leftmost pixel of the left image (in the stitching region) and the rightmost pixel of the right image (also in the stitching region). Then you fill the space in between with interpolated values.
Take this example - I'm using a single line here to show the idea:
Left = [11,11,11,10,10,10,10]
Right= [01,01,01,01,02,02,02]
Let's say our overlap is 4 pixels wide (the last 4 values of Left and the first 4 values of Right form the overlap/stitching region):
Left = [11,11,11,10,10,10,10]
Right= [01,01,01,01,02,02,02]
The leftmost value of the left image would be 10
The rightmost value of the right image would be 1.
Now we interpolate linearly from 10 down to 1 across the region (two intermediate steps); our new stitching region looks as follows:
stitch = [10, 07, 04, 01]
We end up with the following stitched line:
line = [11,11,11,10,07,04,01,02,02,02]
If you apply this to two complete images you should get a result similar to what you posted before.
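The single-line example translates to code almost verbatim (a Java sketch; a full stitcher would apply this to every row or column of the overlap):

```java
class HeightmapStitch {
    // Replace the overlap with a linear ramp between the left line's first
    // value inside the stitch region and the right line's last value inside it.
    static double[] stitchLines(double[] left, double[] right, int overlap) {
        double a = left[left.length - overlap]; // e.g. 10 in the example above
        double b = right[overlap - 1];          // e.g. 1 in the example above
        int keepLeft = left.length - overlap;
        double[] out = new double[left.length + right.length - overlap];
        // untouched part of the left line
        for (int i = 0; i < keepLeft; i++) out[i] = left[i];
        // linear ramp across the stitch region
        for (int k = 0; k < overlap; k++)
            out[keepLeft + k] = a + (b - a) * k / (overlap - 1.0);
        // untouched part of the right line
        for (int i = overlap; i < right.length; i++) out[keepLeft + i] = right[i];
        return out;
    }
}
```

For the arrays above, stitchLines(Left, Right, 4) yields [11,11,11,10,7,4,1,2,2,2], matching the worked example.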

Gradual cursor move algorithm - Kinect SDK

I am building a Kinect SDK WPF application and using the Kinect to move a "cursor"/hand object.
The problem I am having is that at 30 frames a second the cursor jumps around erratically because of the precision of the Kinect (i.e. while you hold your hand still, the object moves within a 5px space).
I am planning on writing an algorithm that doesn't simply move the X/Y of my "cursor" sprite to the reported position on the screen, but behaves more like "move the hand towards this X/Y coordinate", so that the movement is smoother.
Can someone point me to a good one that someone else has written, so I can avoid reinventing the wheel?
I understand that this is probably pretty common, but as I am more of a business developer I am not sure of the name for such a feature, so apologies in advance if it's a n00b question.
When I worked with the Kinect, I just used some simple math (linear interpolation, or "lerp") to move to a point some distance between the cursor's current location and its target location. Get the location of the cursor, get the location the user's hand is at (translated to screen coordinates), then move the cursor to some point between those.
float currentX = ..., currentY = ..., targetX = ..., targetY = ...;
float diffX = targetX - currentX;
float diffY = targetY - currentY;
float delta = 0.5f; // 0 = no movement, 1 = move directly to target point.
currentX = currentX + delta * diffX;
currentY = currentY + delta * diffY;
You'll still get jittering, depending on the delta, but it will be much smoother and generally in a smaller area.
On a related note, have you taken a look at the Kinect's skeleton smoothing parameters? You can actually let the SDK handle some of the filtering.
Consider your input values (those jumping positions) as a signal with both low and high frequency parts. The low frequencies represent the rough position/movement while the high frequency parts contain the fast jumping within smaller distances.
So what you need, or should look for, is a low-pass filter. It filters out the high-frequency parts and leaves the rough (but as accurate as the Kinect can get) position, if you manage to set it up with the right parameter. This parameter is the crossover frequency of the filter. You have to play around a bit and you will see.
An implementation example for time-discrete values would be from here (originally from wikipedia):
static final float ALPHA = 0.15f;

protected float[] lowPass( float[] input, float[] output ) {
    if ( output == null ) return input;
    for ( int i=0; i<input.length; i++ ) {
        output[i] = output[i] + ALPHA * (input[i] - output[i]);
    }
    return output;
}
You can put the last values of both the X and Y components of your position vectors into this function to smooth them out (input[0] for X and input[1] for Y, output[0] and output[1] are results of the previous function call).
Like I already said, you have to find a good balance for the smoothing factor ALPHA (0 ≤ ALPHA ≤ 1):
Too big and the signal will not get smoothed enough; the effect won't be sufficient.
Too small and the signal will be smoothed 'too much'; the cursor will lag behind the user's movement, too much inertia.
(If you look at the formula newout = out + alpha * (in - out), you see that with an alpha value of 0 you just take the old out value again, so the value never changes; while with a value of 1 you have newout = out + in - out, meaning you don't smooth anything but always take the newest value.)
One very simple idea for solving this problem would be to display the cursor at a location that's the average of some past number of positions. For example, suppose that you track the last five locations of the hand and then display the cursor at that position. Then if the user's hand is relatively still, the jerkiness from frame to frame should be reasonably low, because the last five frames will have had the hand in roughly the same position and the noise should cancel out. If the user then moves the cursor across the screen, the cursor will animate as it moves from its old position to the new position, since as you factor in the last five positions of the hand the average position will slowly interpolate between its old and new positions.
This approach is very easily tweaked. You could weight the data points so that older points count more or less than newer ones, and you could adjust the length of the history you keep.
Hope this helps!
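A minimal version of this averaging filter (Java for illustration; the history length of 5 matches the example in the text and is just a value to tune):

```java
import java.util.ArrayDeque;
import java.util.Deque;

class MovingAverageFilter {
    private final Deque<float[]> history = new ArrayDeque<>();
    private final int size;

    MovingAverageFilter(int size) {
        this.size = size;
    }

    // Push the latest raw hand position; get back the averaged cursor position.
    float[] smooth(float x, float y) {
        history.addLast(new float[] { x, y });
        if (history.size() > size) history.removeFirst(); // keep only the last `size` samples
        float sx = 0, sy = 0;
        for (float[] p : history) {
            sx += p[0];
            sy += p[1];
        }
        return new float[] { sx / history.size(), sy / history.size() };
    }
}
```

While the hand is still, the noise roughly cancels; while it moves, the cursor glides toward the new position over the next few frames.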

Converting GPS coordinates to X, Y, Z coordinates

So here is the basic problem. I am currently working on a GPS system in C# in Unity 3D (the person that has given us the assignment is making us use this program, so I can't do it in anything else).
Now I've run into a small problem. Basically, we are able to request (what we think are decimal) coordinates from an Android phone, and we are trying to convert those coordinates to X, Y, Z coordinates. Preferably just X and Z, because we do not actually need height. However, everything we have found on the internet so far has been for conversion onto a sphere, whereas we just have a basic flat digital map.
If anyone knows how to convert the coordinates we have (our longitude and latitude) to basic X and Z coordinates, it'd be amazing.
To quickly note I am not sure if the sort of coordinates we have are actual decimal coordinates so this is what they look like:
Latitude: 53.228888 Longitude: 6.5403559
these coordinates should end up on "Wegalaan 3, Groningen, The Netherlands" if you would look them up on a map.
Thanks already!
EDIT: (this is also in the comments)
Sorry if it might be confusing; honestly I only half understand how all this works. Anyway, to clear some things up: I am currently working in Unity with a simple 2D map of the city I live in (Groningen, The Netherlands) that I got from the internet, and I am trying to take GPS coordinates from my Android phone and show them on that map with a red dot. To do this I need to be able to move the red dot to the right coordinates on the map. What I am trying to do is convert the GPS coords (lon and lat) to X and Z (Unity3D flat coordinates; may also just be X and Y), so that if I align the map right I get a small GPS system for just my city. If you are curious as to why I am doing this: a friend of mine and I are trying to build a game using our city and this GPS system as a basis.
EDIT2:
Except that, I'll be honest, I have no idea how cartesian coordinates work, but they seem to be what I am looking for, yes :P Coordinates on a flat plane; by X,Y coords I mean basically just coordinates I could use in Unity3D on a flat 2D plane, which is what I am working in.
EDIT3:
Thanks for the answers, to start. This is not a duplicate; secondly, my friend and I already found the Stack Overflow topic you sent me, but it seems not to be working for us (maybe we did something wrong). Basically, the north-south distances between the different places we tested with that formula came out right, but the east-west distances were way bigger than they should have been. We think it might be because that formula was meant for a spherical earth, but maybe we did something wrong. If someone could explain further, that'd be amazing!
EDIT4:
We are sure it can't be our map that is wrong in any way, because we have aligned it with multiple locations. We got the coordinates for these locations and then used this website: http://www.gpscoordinaten.nl/converteer-rd-coordinaten.php to convert it to XY coordinates and then used these XY coordinates to check if our map would align properly. It did, so we are sure there is some problem with the maths we are using and not with our actual map.
EDIT5: Removed many, many grammatical errors. It's way too hot over here to be writing properly right now, so I am very very sorry if any of this makes no sense. just let me know and I'll edit to try and explain what we are trying to do.
EDIT6: Found my own answer; it is down in between all the other answers if you want to see what I did to fix my problem.
By now I have found the answer to my own question (I actually found it a little while ago, but totally forgot to post it here).
Basically, I made a little formula of my own that multiplies the coordinates by a set number (a different one for the x and y axes) derived from the difference between two set coordinate points. These two points are the outer points of my map.
By doing this I get quite accurate measurements; I haven't even had a meter of difference from my actual position yet.
I know this sounds a bit vague and I don't entirely know how to explain it, but for anyone interested here is the code I use:
void RetrieveGPSData()
{
    currentGPSPosition = Input.location.lastData;
    System.DateTime dateTime = new System.DateTime(1970, 1, 1, 0, 0, 0, 0);
    dateTime = dateTime.AddSeconds(currentGPSPosition.timestamp);
    float z = latToZ (Input.location.lastData.latitude);
    float x = lonToX (Input.location.lastData.longitude);
    this.transform.position = new Vector3 (x, 0f, z);
    gpsString = ("Last Location: " + Input.location.lastData.latitude.ToString () + " " + Input.location.lastData.longitude.ToString () + " " + Input.location.lastData.altitude.ToString () + " " + Input.location.lastData.horizontalAccuracy.ToString () + " " + dateTime.ToShortDateString() + " " + dateTime.ToShortTimeString());
}

float latToZ (double lat){
    lat = (lat - 53.178469) / 0.00001 * 0.12179047095976932582726898256213;
    double z = lat;
    return (float)z;
}

float lonToX (double lon){
    lon = (lon - 6.503091) / 0.000001 * 0.00728553580298947812081345114627;
    double x = lon;
    return (float)x;
}
Now, for anyone wondering why I subtract the 53.something and the 6.something from the lat and lon: that is the coordinate of the 0 X, 0 Z point in Unity, which corresponds to the bottom-left corner of my map and is one of the two points I talked about using for this calculation.
I hope this helps anyone else who might ever be stuck on something similar and if you have any questions feel free to ask them.
-LAKster
Does the map use a Mercator projection? Is it a world map?
If so, the top of the map is usually around latitude 85, with latitude 0 at the vertical center.
Longitude 0 is at Greenwich (usually the horizontal center); 180/-180 is at the antimeridian/date line, at the left and right edges of the map.
When the longitude is -180, x would be 0; when it's 180, x == the width of the map.
This link should tell you what you need
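For a city-sized map like the one in the question, the conversion boils down to a linear rescale between two known reference corners, which is essentially what the magic constants in the self-answer encode (a Java sketch; the corner coordinates, width, and height below are placeholder values):

```java
class MapProjector {
    // Geographic coordinates of the bottom-left (latMin, lonMin) and
    // top-right (latMax, lonMax) corners of the map image, plus the map
    // size in pixels or Unity units. All values are supplied by the caller.
    final double latMin, lonMin, latMax, lonMax;
    final double width, height;

    MapProjector(double latMin, double lonMin, double latMax, double lonMax,
                 double width, double height) {
        this.latMin = latMin; this.lonMin = lonMin;
        this.latMax = latMax; this.lonMax = lonMax;
        this.width = width; this.height = height;
    }

    // Linear (equirectangular) mapping: fine across a single city, where
    // the projection distortion within the map is negligible.
    double lonToX(double lon) {
        return (lon - lonMin) / (lonMax - lonMin) * width;
    }

    double latToZ(double lat) {
        return (lat - latMin) / (latMax - latMin) * height;
    }
}
```

Because both reference corners are read off the same map, the east-west scale factor (which shrinks with cos(latitude)) is absorbed into the lonMin/lonMax span, avoiding the east-west error described in EDIT3.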
