Leaflet Maps with Custom Tiles Zoom offset x y - C#

I'm trying to use Leaflet to render large images using x,y coordinates, like so:
var map = L.map('map', {
    crs: L.CRS.Simple,
    attributionControl: false,
    reuseTiles: true
}).setView([0, 0], 1);
The problem is that when I zoom I seem to get an offset. So as I continually zoom in the map appears to shift.
I am drawing the image on the backend using C# and GDI+, so it's quite possible that I am getting this code wrong:
private void DrawLine(int x, int y, int z, int squareSize, Graphics g, Shape shape, Pen drawPen)
{
    Line line = (Line)shape;
    var scalingFactor = 0.1;
    var zoom = (z * (scalingFactor));
    double startScaledX = (line.StartPoint.X * zoom) + ((squareSize * -1) * x);
    double startScaledY = (line.StartPoint.Y * -1 * zoom) + ((squareSize * -1) * y);
    double endScaledX = (line.EndPoint.X * zoom) + ((squareSize * -1) * x);
    double endScaledY = (line.EndPoint.Y * -1 * zoom) + ((squareSize * -1) * y);
    var width = Math.Abs(endScaledX - startScaledX);
    var height = Math.Abs(endScaledY - startScaledY);
    var startPoint = new System.Drawing.PointF((float)startScaledX, (float)startScaledY);
    var endPoint = new System.Drawing.PointF((float)endScaledX, (float)endScaledY);
    var rectDrawBounds = new RectangleF((float)startScaledX, (float)startScaledY, (float)width, (float)height);
    var rectTileBounds = new RectangleF(0, 0, 256, 256);
    g.DrawLine(drawPen, startPoint, endPoint);
}
I have noticed that if I zoom in and out at [0,0] then the zoom works perfectly. Everything else seems to shift the map.
I would appreciate any help that you can offer.

In Leaflet's L.CRS.Simple, the map scale grows by a factor of 2 every zoom level. In other words:
scale = 2**z;
or
scale = Math.pow(2,z);
or
scale = 1<<z;
Or, in other words:
At zoom level 0, a 256-pixel tile covers 256 map units; one map unit spans 1 pixel.
At zoom level 1, a 256-pixel tile covers 128 map units; one map unit spans 2 pixels.
At zoom level 2, a 256-pixel tile covers 64 map units; one map unit spans 4 pixels.
At zoom level n, a 256-pixel tile covers 256/2^n map units; one map unit spans 2^n pixels.
For reference, see https://github.com/Leaflet/Leaflet/blob/master/src/geo/crs/CRS.Simple.js
Fix your z, scalingFactor and zoom calculations and relationships accordingly.
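As a minimal sketch (reusing the names from the question's DrawLine method, and assuming scalingFactor is the base scale you want at zoom level 0), the zoom should grow exponentially with z rather than linearly:
// Sketch only: CRS.Simple doubles the scale every zoom level.
// scalingFactor is an assumed base scale for zoom level 0.
var scale = Math.Pow(2, z);           // 1, 2, 4, 8, ...
var zoom = scale * scalingFactor;     // map units -> pixels at this zoom level

double startScaledX = (line.StartPoint.X * zoom) - (squareSize * x);
double startScaledY = (line.StartPoint.Y * -1 * zoom) - (squareSize * y);
double endScaledX = (line.EndPoint.X * zoom) - (squareSize * x);
double endScaledY = (line.EndPoint.Y * -1 * zoom) - (squareSize * y);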

Related

Finding the height on virtual terrain

I generated a virtual terrain consisting of quads in my code, and I am now trying to find the height of the terrain at a certain point. To clarify: I have a terrain with a width and depth in the X and Y directions, and a height in the Z direction. I want to know at what Z a line at a specific X and Y intersects my plane.
The terrain itself is stored as quads in a two-dimensional array (the indices are the coords, I just store the height) and I'm using the following code:
(it uses the cross product of the vectors from the bottom left to bottom right and top left points)
float getTerrainHeight(float x, float y)
{
    int ix = (int)x;
    int iy = (int)y;
    Vector3 V1 = new Vector3(ix, iy, heights[ix][iy]);
    Vector3 V2 = new Vector3(ix + 1, iy, heights[ix + 1][iy]);
    Vector3 V3 = new Vector3(ix, iy + 1, heights[ix][iy + 1]);
    if ((x - ix) + (y - iy) > 1)
    {
        V1 = new Vector3(ix + 1, iy + 1, heights[ix + 1][iy + 1]);
    }
    Vector3 cross = Vector3.Cross(V2 - V1, V3 - V1);
    return (cross.X * (x - ix) + cross.Y * (y - iy)) / -cross.Z + heights[ix][iy];
}
This kinda works, but there are some mismatches: when I move over the terrain there are always some dents where the height is lower than it should be. Does anybody know what's going wrong?
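For comparison, here is a minimal sketch of the same plane interpolation with the offsets and base height both taken from V1 (purely illustrative, not from the original post; note that the return line above always measures from the heights[ix][iy] corner, even after V1 has been swapped to the far corner of the quad):
// Illustrative sketch only: evaluate the plane through V1, V2, V3,
// anchoring both the offsets and the base height at V1.
float getTerrainHeightAnchored(float x, float y)
{
    int ix = (int)x;
    int iy = (int)y;
    Vector3 V1 = new Vector3(ix, iy, heights[ix][iy]);
    Vector3 V2 = new Vector3(ix + 1, iy, heights[ix + 1][iy]);
    Vector3 V3 = new Vector3(ix, iy + 1, heights[ix][iy + 1]);
    if ((x - ix) + (y - iy) > 1)
    {
        // far triangle of the quad: anchor at the top-right corner
        V1 = new Vector3(ix + 1, iy + 1, heights[ix + 1][iy + 1]);
    }
    Vector3 cross = Vector3.Cross(V2 - V1, V3 - V1);
    // plane equation cross . (P - V1) = 0, solved for P.Z
    return (cross.X * (x - V1.X) + cross.Y * (y - V1.Y)) / -cross.Z + V1.Z;
}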

Problems with centering scene

I have to display STL models with OpenGL (SharpGL). I'd like to set the initial view so that the model is at the center of the screen and approximately fills it. I've calculated the bounding cube of the models and set the view like this (sceneBox is a Rect3D - it stores the location of the left-back-bottom corner and the sizes):
// Calculate viewport properties
double left = sceneBox.X;
double right = sceneBox.X + sceneBox.SizeX;
double bottom = sceneBox.Y;
double top = sceneBox.Y + sceneBox.SizeY;
double zNear = 1.0;
double zFar = zNear + 3 * sceneBox.SizeZ;
double aspect = (double)this.ViewportSize.Width / (double)this.ViewportSize.Height;
if (aspect < 1.0) {
    bottom /= aspect;
    top /= aspect;
} else {
    left *= aspect;
    right *= aspect;
}
// Create a perspective transformation.
gl.Frustum(
    left / ZoomFactor,
    right / ZoomFactor,
    bottom / ZoomFactor,
    top / ZoomFactor,
    zNear,
    zFar);
// Use the 'look at' helper function to position and aim the camera.
gl.LookAt(
    0, 0, 2 * sceneBox.SizeZ,
    sceneBox.X + 0.5 * sceneBox.SizeX, sceneBox.Y + 0.5 * sceneBox.SizeY, sceneBox.Z - 0.5 * sceneBox.SizeZ,
    0, 1, 0);
This works nicely with my small, hand-made test model (it has a box size of 2*2*2 units). This is exactly what I want (the yellow lines show the bounding box).
But when I load an STL model, which is about 60*60*60 units big, the model appears very small and too far up.
What should I change to make it work?
Here's the full thing: https://dl.dropbox.com/u/17798054/program.zip
You can find this model in the zip as well. The quoted code is in KRGRAAT.SZE.Control.Engine.GLEngine.UpdateView()
Apparently the problem is the arguments you are using in the lookAt function. If you have calculated the bounding cube, all you need to do is place it at a distance (eyeZ) from the camera of
sizeX / tan(angleOfPerspective)
where sizeX is the width of the quad the cube is built from and angleOfPerspective is the first parameter of gl.Perspective. Of course centerX == posX == the centre X of the front quad and centerY == posY == the centre Y of the front quad, and the frustum is not necessary.
lookAt reference http://www.opengl.org/sdk/docs/man2/xhtml/gluLookAt.xml
So, to clarify Arek's answer, this is how I fixed it:
// Calculate viewport properties
double zNear = 1.0;
double zFar = zNear + 10 * sceneBox.SizeZ; // had to increase zFar
double aspect = (double)this.ViewportSize.Width / (double)this.ViewportSize.Height;
double angleOfPerspective = 60.0;
double centerX = sceneBox.X + 0.5 * sceneBox.SizeX;
double centerY = sceneBox.Y + 0.5 * sceneBox.SizeY;
double centerZ = sceneBox.Z + 0.5 * sceneBox.SizeZ;
// Create a perspective transformation.
gl.Perspective( // swapped frustum for perspective
    angleOfPerspective / ZoomFactor, // moved zooming here
    aspect,
    zNear,
    zFar);
// Use the 'look at' helper function to position and aim the camera.
gl.LookAt(
    centerX, centerY, sceneBox.SizeX / Math.Tan(angleOfPerspective), // changed eye position
    centerX, centerY, -centerZ,
    0, 1, 0);

Kinect Cursor Control with virtual XNA-Rectangle-'Touchpad' - Y-Axis inverted

I'm trying to implement a Real-Time Strategy control scheme for the MS Kinect.
So far, I've got a cursor which can be moved by moving your left hand (or right, dependent on your handedness). I've got an OpenNI-based Kinect controller which sets up a skeleton for player movements and delivers the wrist, elbow, shoulder and body-center coordinates to my application.
To project these wrist coordinates to the screen, I've set up a Rectangle which sits slightly to the left/right of the player's center; as long as the wrist moves inside the rectangle, the cursor moves on screen.
My problem is that the XNA Rectangle has the upper left corner as its point of origin, i.e. the X-axis points right, as it "should", but the Y-axis points down, while the Y-axis of the Kinect coordinate system points up. This results in the cursor moving upwards on screen when I move my hand down, and vice versa. There's no way for me to change anything about the Kinect coordinate system, so is it possible to 'flip' the 'coordinate system' of the rectangle so that its Y-axis points up, too?
Here's the relevant code:
(from the Calibrate() method:)
List<Vector3> joints = UDPlistener.getInstance().ParseCalibCoordinates(data);
//0 = Right Wrist  1 = Right Elbow  2 = Right Shoulder
//3 = Left Wrist   4 = Left Elbow   5 = Left Shoulder
//6 = Center
height = 762;
width = 1024;
switch (hand)
{
    case 0:
        cursorSpace = new Rectangle((int)(joints[6].X * 2) - 200, (int)(joints[6].Y * 2) + height, width, height);
        break;
    case 3:
        cursorSpace = new Rectangle((int)(joints[6].X * 2) - 1200, (int)(joints[6].Y * 2) + height, width, height);
        break;
}
public Point Cursor(String data)
{
    List<Vector3> joints = UDPlistener.getInstance().ParsePlayCoordinates(data);
    //0 = Right Wrist  1 = Left Wrist  2 = Center
    double mhx = 0; // main hand x-coordinate
    double mhy = 0; // main hand y-coordinate
    switch (hand)
    {
        case 0:
            mhx = joints[hand].X;
            mhy = joints[hand].Y;
            break;
        case 3:
            mhx = joints[hand - 2].X;
            mhy = joints[hand - 2].Y;
            break;
    }
    int x;
    int y;
    if (Math.Abs(mhx - mhxOld) < 1.0 || Math.Abs(mhy - mhyOld) < 1.0)
    // To remove jittering of the cursor
    {
        x = (int)mhxOld * 2;
        y = (int)mhyOld * 2;
    }
    else
    {
        x = (int)mhx * 2;
        mhxOld = mhx;
        y = (int)mhy * 2;
        mhyOld = mhy;
    }
    Point cursor = new Point(0, 0);
    if (cursorSpace.Contains(x, y))
    {
        cursor = new Point(x - cursorSpace.X, y - cursorSpace.Y);
        lastCursorPos = cursor;
        return cursor;
    }
    // outside the rectangle: keep the last known position
    return lastCursorPos;
}
Sorry for the wall of text; I hope I could make myself clear.
Thanks in advance,
KK
I use an extension method for converting OpenNI coordinates. The following example maps the OpenNI coordinates to XNA coordinates in a 640x480 rectangle in the top left corner, represented as a Vector2 object.
public static Vector2 ToXnaCoordinates(this Point3D point)
{
    return new Vector2(
        point.X + 320,
        (point.Y - 240) * -1);
}
The magic that flips the y coordinate is the * -1 part.
If you want to reach a rectangle of different size than 640x480, you need to scale the coordinates accordingly after conversion. Example:
public static Vector2 ToScaledXnaCoordinates(this Point3D point, int rectSizeX, int rectSizeY)
{
    return new Vector2(
        (point.X + 320) * rectSizeX / 640,
        (point.Y - 240) * -rectSizeY / 480);
}
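For example, a hypothetical call site might look like the following ('wrist' stands for a raw OpenNI Point3D joint; the 1024x762 size comes from the question's cursor rectangle, not from the original answer):
// Illustration only: 'wrist' is an assumed Point3D joint position from OpenNI.
Vector2 inTopLeft640x480 = wrist.ToXnaCoordinates();
Vector2 inCursorRect = wrist.ToScaledXnaCoordinates(1024, 762);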
I know this isn't XNA, but I wanted to put this out there for the WPF users :) If you are using something like Channel 9's approach, just have a bool to determine whether or not it's inverted. Example:
private void ScalePosition(FrameworkElement element, Joint joint, bool inverted)
{
    // convert the value to X/Y
    Joint scaledJoint = joint.ScaleTo(967, 611);
    // convert & scale (.3 = means 1/3 of joint distance)
    //Joint scaledJoint = joint.ScaleTo(1280, 720, 1f, 1f);
    if (!inverted)
    {
        Canvas.SetLeft(element, scaledJoint.Position.X);
        Canvas.SetTop(element, scaledJoint.Position.Y);
    }
    if (inverted)
    {
        Canvas.SetLeft(element, scaledJoint.Position.X);
        Canvas.SetBottom(element, scaledJoint.Position.Y);
    }
}
Hope this helps WPF users!

Silverlight Rotate & Scale a bitmap image to fit within rectangle without cropping

I need to rotate a WriteableBitmap and scale it down or up before it gets cropped.
My current code will rotate, but it crops the edges if the height is larger than the width.
I assume I need to scale?
public WriteableBitmap Rotate(WriteableBitmap Source, double Angle)
{
    RotateTransform rt = new RotateTransform();
    rt.Angle = Angle;
    TransformGroup transform = new TransformGroup();
    transform.Children.Add(rt);
    Image tempImage2 = new Image();
    WriteableBitmap wb;
    rt.CenterX = Source.PixelWidth / 2;
    rt.CenterY = Source.PixelHeight / 2;
    tempImage2.Width = Source.PixelWidth;
    tempImage2.Height = Source.PixelHeight;
    wb = new WriteableBitmap(Source.PixelWidth, Source.PixelHeight);
    tempImage2.Source = Source;
    tempImage2.UpdateLayout();
    wb.Render(tempImage2, transform);
    wb.Invalidate();
    return wb;
}
How do I scale down the image so it will not be cropped? Or is there another way?
You need to calculate the scaling based on the rotation of the corners relative to the centre.
If the image is a square only one corner is needed, but for a rectangle you need to check 2 corners in order to see if a vertical or horizontal edge is overlapped. This check is a linear comparison of how much the rectangle's height and width are exceeded.
A working testbed app was created for this answer (apologies, all my website content was lost thanks to a non-awesome hosting company).
double CalculateConstraintScale(double rotation, int pixelWidth, int pixelHeight)
The pseudo-code is as follows (actual C# code at the end):
Convert rotation angle into Radians
Calculate the "radius" from the rectangle centre to a corner
Convert BR corner position to polar coordinates
Convert BL corner position to polar coordinates
Apply the rotation to both polar coordinates
Convert the new positions back to Cartesian coordinates (ABS value)
Find the largest of the 2 horizontal positions
Find the largest of the 2 vertical positions
Calculate the delta change for horizontal size
Calculate the delta change for vertical size
Return width/2 / x if horizontal change is greater
Return height/2 / y if vertical change is greater
The result is a multiplier that will scale the image down to fit the original rectangle regardless of rotation.
Note: While it is possible to do much of the maths using matrix operations, there are not enough calculations to warrant that. I also thought it would make a better example from first principles.
C# Code:
/// <summary>
/// Calculate the scaling required to fit a rectangle into a rotation of that same rectangle
/// </summary>
/// <param name="rotation">Rotation in degrees</param>
/// <param name="pixelWidth">Width in pixels</param>
/// <param name="pixelHeight">Height in pixels</param>
/// <returns>A scaling value between 1 and 0</returns>
/// <remarks>Released to the public domain 2011 - David Johnston (HiTech Magic Ltd)</remarks>
private double CalculateConstraintScale(double rotation, int pixelWidth, int pixelHeight)
{
    // Convert angle to radians for the math lib (PiDiv180 is a class constant equal to Math.PI / 180.0)
    double rotationRadians = rotation * PiDiv180;
    // Centre is half the width and height
    double width = pixelWidth / 2.0;
    double height = pixelHeight / 2.0;
    double radius = Math.Sqrt(width * width + height * height);
    // Convert BR corner into polar coordinates
    double angle = Math.Atan(height / width);
    // Now create the matching BL corner in polar coordinates
    double angle2 = Math.Atan(height / -width);
    // Apply the rotation to the points
    angle += rotationRadians;
    angle2 += rotationRadians;
    // Convert back to rectangular coordinates
    double x = Math.Abs(radius * Math.Cos(angle));
    double y = Math.Abs(radius * Math.Sin(angle));
    double x2 = Math.Abs(radius * Math.Cos(angle2));
    double y2 = Math.Abs(radius * Math.Sin(angle2));
    // Find the largest extents in X & Y
    x = Math.Max(x, x2);
    y = Math.Max(y, y2);
    // Find the largest change (pixel, not ratio)
    double deltaX = x - width;
    double deltaY = y - height;
    // Return the ratio that will bring the largest change into the region
    return (deltaX > deltaY) ? width / x : height / y;
}
Example of use:
private WriteableBitmap GenerateConstrainedBitmap(BitmapImage sourceImage, int pixelWidth, int pixelHeight, double rotation)
{
    double scale = CalculateConstraintScale(rotation, pixelWidth, pixelHeight);
    // Create a transform to render the image rotated and scaled
    var transform = new TransformGroup();
    var rt = new RotateTransform()
    {
        Angle = rotation,
        CenterX = (pixelWidth / 2.0),
        CenterY = (pixelHeight / 2.0)
    };
    transform.Children.Add(rt);
    var st = new ScaleTransform()
    {
        ScaleX = scale,
        ScaleY = scale,
        CenterX = (pixelWidth / 2.0),
        CenterY = (pixelHeight / 2.0)
    };
    transform.Children.Add(st);
    // Resize to specified target size
    var tempImage = new Image()
    {
        Stretch = Stretch.Fill,
        Width = pixelWidth,
        Height = pixelHeight,
        Source = sourceImage,
    };
    tempImage.UpdateLayout();
    // Render to a writeable bitmap
    var writeableBitmap = new WriteableBitmap(pixelWidth, pixelHeight);
    writeableBitmap.Render(tempImage, transform);
    writeableBitmap.Invalidate();
    return writeableBitmap;
}
I released a test bed of the code on my website so you can try it for real (apologies, all my website content was lost thanks to a non-awesome hosting company).

trying to render an equirectangular panorama

I have an equirectangular panorama source image which is 360 degrees of longitude and 120 degrees of latitude.
I want to write a function which can render this, given the width and height of the viewport and a rotation in longitude. I want my output image to cover the full 120 degrees in height.
Has anyone got any pointers? I can't get my head around the maths of how to transform from target coordinates back to source.
Thanks,
slip
Here is my code so far (create a C# 2.0 console app and add a reference to System.Drawing):
static void Main(string[] args)
{
    Bitmap src = new Bitmap(@"C:\Users\jon\slippyr4\pt\grid2.jpg");
    // constant stuff
    double view_width_angle = d2r(150);
    double view_height_angle = d2r(120);
    double rads_per_pixel = 2.0 * Math.PI / src.Width;
    // scale everything off the height
    int output_image_height = src.Width;
    // compute radius (from chord trig - my output image forms a chord of a circle with angle view_height_angle)
    double radius = output_image_height / (2.0 * Math.Sin(view_height_angle / 2.0));
    // work out the image width with that radius.
    int output_image_width = (int)(radius * 2.0 * Math.Sin(view_width_angle / 2.0));
    // source centres for later
    int source_centre_x = src.Width / 2;
    int source_centre_y = src.Height / 2;
    // work out adjacent length
    double adj = radius * Math.Cos(view_width_angle / 2.0);
    // create output bmp
    Bitmap dst = new Bitmap(output_image_width, output_image_height);
    // x & y are output pixels offset from output centre
    for (int x = output_image_width / -2; x < output_image_width / 2; x++)
    {
        // map this x to an angle & then a pixel
        double x_angle = Math.Atan(x / adj);
        double src_x = (x_angle / rads_per_pixel) + source_centre_x;
        // work out the hypotenuse of that triangle
        double x_hyp = adj / Math.Cos(x_angle);
        for (int y = output_image_height / -2; y < output_image_height / 2; y++)
        {
            // compute the y angle and then its pixel
            double y_angle = Math.Atan(y / x_hyp);
            double src_y = (y_angle / rads_per_pixel) + source_centre_y;
            Color c = Color.Magenta;
            // this handles out of range source pixels. these will end up magenta in the target
            if (src_x >= 0 && src_x < src.Width && src_y >= 0 && src_y < src.Height)
            {
                c = src.GetPixel((int)src_x, (int)src_y);
            }
            dst.SetPixel(x + (output_image_width / 2), y + (output_image_height / 2), c);
        }
    }
    dst.Save(@"C:\Users\slippyr4\Desktop\pana.jpg");
}

static double d2r(double degrees)
{
    return degrees * Math.PI / 180.0;
}
With this code, I get the results I expect when I set my target image width to 120 degrees: I see the right curvature of horizontal lines etc., and when I try it with a real-life equirectangular panorama it looks like what commercial viewers render.
But when I make the output image wider, it all goes wrong. You start to see the invalid pixels in a parabola at the top and bottom around the centre, for example with an image 150 degrees wide by 120 degrees high.
What commercial viewers seem to do is sort of zoom in, so that in the centre the image is 120 degrees high and therefore more is clipped at the sides; as a result there is no magenta (i.e. no invalid source pixels).
But I can't get my head around how to do that in the maths.
This isn't homework, it's a hobby project, hence why I am lacking the understanding of what is going on. Also, please forgive the severe inefficiency of the code; I will optimise it when I have it working properly.
Thanks again.
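As a rough sketch of the idea described above (not from the original post): keep the vertical field of view pinned to 120 degrees along the centre column by deriving the output height from adj, instead of deriving the radius from the output height. At the centre column x_hyp equals adj, so the top and bottom rows map to exactly +/-60 degrees; towards the sides x_hyp grows and the sampled vertical angle shrinks, so no rows outside the 120-degree band are requested.
// Sketch only: choose the output width first, then size the projection plane so
// the centre column spans exactly view_height_angle.
int output_image_width = src.Width;   // illustrative choice, not from the original code
double adj = (output_image_width / 2.0) / Math.Tan(view_width_angle / 2.0);
int output_image_height = (int)(2.0 * adj * Math.Tan(view_height_angle / 2.0));
// The sampling loop is unchanged: x_angle = Atan(x / adj), x_hyp = adj / Cos(x_angle),
// y_angle = Atan(y / x_hyp). At the centre column the top and bottom rows now reach
// exactly +/-60 degrees, and smaller angles towards the sides, so no magenta appears.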
