I'm trying to read a GeoTIFF with one band which stores a value between 0 and 3 (255 is the no-data value). My goal is to write a little program which takes a latitude/longitude and returns the matching pixel value at that geocoordinate in the GeoTIFF.
I downloaded it from here: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/QQHCIK
And if you want to take a look at it, here's the download link: https://drive.google.com/file/d/104De_YQN1V8tSbj2uhIPO0FfowjnInnX/view?usp=sharing
However, my code does not work. I can't tell you what is wrong; it just outputs the wrong pixel value every single time. I guess either my geocoordinate-to-pixel transformation is wrong or there's a big logic mistake.
This was my first try; it prints the wrong value for the geocoordinate.
Furthermore, it crashes with some geocoordinates: a negative longitude makes it crash, because pixelY becomes negative, which causes an exception in the ReadRaster method.
GdalConfiguration.ConfigureGdal();
var tif = "C:\\Users\\Lars\\Downloads\\pnv_biome.type_biome00k_c_1km_s0..0cm_2000..2017_v0.1.tif";
var lat = 51.0;
var lon = 8.3;
using (var image = Gdal.Open(tif, Access.GA_ReadOnly)) {
    var bOneBand = image.GetRasterBand(1);
    var width = bOneBand.XSize;
    var height = bOneBand.YSize;
    // Geocoordinate ( lat, lon ) to pixel in the tiff ?
    var geoTransform = new double[6];
    image.GetGeoTransform(geoTransform);
    Gdal.InvGeoTransform(geoTransform, geoTransform);
    var bOne = new int[1];
    Gdal.ApplyGeoTransform(geoTransform, lat, lon, out var pixelXF, out var pixelYF);
    var pixelX = (int)Math.Floor(pixelXF);
    var pixelY = (int)Math.Floor(pixelYF);
    // Read pixel
    bOneBand.ReadRaster(pixelX, pixelY, 1, 1, bOne, 1, 1, 0, 0);
    Console.WriteLine(bOne[0]); // bOne[0] contains wrong data
}
My second attempt looks like the following, but it also outputs the wrong pixel value for the given coordinates, and it also crashes with some geocoordinates.
GdalConfiguration.ConfigureGdal();
var tif = "C:\\Users\\Lars\\Downloads\\pnv_biome.type_biome00k_c_1km_s0..0cm_2000..2017_v0.1.tif";
var lat = 24.7377; // 131.5847
var lon = 131.5847;
using (var image = Gdal.Open(tif, Access.GA_ReadOnly)) {
    var bOneBand = image.GetRasterBand(1);
    var bOne = new int[1];
    // Spatial reference to transform latlng to map coordinates... from python code
    var point_srs = new SpatialReference("");
    var file_srs = new SpatialReference(image.GetProjection());
    point_srs.ImportFromEPSG(4326);
    point_srs.SetAxisMappingStrategy(AxisMappingStrategy.OAMS_TRADITIONAL_GIS_ORDER);
    var mapCoordinates = new double[2] { lat, lon };
    var transform = new CoordinateTransformation(point_srs, file_srs);
    transform.TransformPoint(mapCoordinates);
    // Map coordinates to pixel coordinates ?
    var geoTransform = new double[6];
    image.GetGeoTransform(geoTransform);
    Gdal.InvGeoTransform(geoTransform, geoTransform);
    Gdal.ApplyGeoTransform(geoTransform, mapCoordinates[0], mapCoordinates[1], out var pixelXF, out var pixelYF);
    var pixelX = (int)pixelXF;
    var pixelY = (int)pixelYF;
    bOneBand.ReadRaster(pixelX, pixelY, 1, 1, bOne, 1, 1, 0, 0);
    Console.WriteLine(bOne[0]); // bOne[0] contains wrong value
}
What is wrong with my code, and why does it output the wrong pixel value?
Any help appreciated!
Perhaps you are using lat/lon where it should be lon/lat? It would be helpful if you printed some values such as mapCoordinates.
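For example, when you apply the inverted geotransform, the x input (longitude / easting) comes first and the y input (latitude / northing) second, and the outputs are pixel column and row. An untested sketch based on the first snippet (the same ordering question applies to mapCoordinates in the second one):
// x (longitude) first, y (latitude) second when applying the inverted geotransform
Gdal.ApplyGeoTransform(geoTransform, lon, lat, out var pixelXF, out var pixelYF);
Console.WriteLine($"pixelX={pixelXF}, pixelY={pixelYF}"); // print the intermediate values to sanity-check them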
Here is how you can do this with R, for comparison:
library(terra)
r = rast("pnv_biome.type_biome00k_c_1km_s0..0cm_2000..2017_v0.1.tif")
xy = cbind(c(8.3, 131.5847), c(51.0, 24.7377))
extract(r, xy)
# pnv_biome.type_biome00k_c_1km_s0..0cm_2000..2017_v0.1
#1 9
#2 NA
And here are the zero-based row/column numbers:
rowColFromCell(r, cellFromXY(r, xy)) - 1
# [,1] [,2]
#[1,] 4364 22596
#[2,] 7515 37390
And you can use describe (a.k.a. GDALinfo) to get some metadata, e.g.:
d = describe("pnv_biome.type_biome00k_c_1km_s0..0cm_2000..2017_v0.1.tif")
d[50]
# [1] "Band 1 Block=43200x1 Type=Byte, ColorInterp=Gray"
Related
I am trying to get the miles between two locations using Xamarin.Essentials. I have the coordinates for the starting point and the destination. The distance variable equals 2134.13057845059. Do I need to convert this number to miles? It should be about 30 miles.
var location = await Geolocation.GetLastKnownLocationAsync();
var courtLocation = new Location(Convert.ToDouble(item.Latitude), Convert.ToDouble(item.Longitude));
double distance = Location.CalculateDistance(location, courtLocation, DistanceUnits.Kilometers);
The last parameter of CalculateDistance is the unit of the returned value.
You are requesting kilometers:
double distance = Location.CalculateDistance(location, courtLocation, DistanceUnits.Kilometers);
If you want miles, use:
double distance = Location.CalculateDistance(location, courtLocation, DistanceUnits.Miles);
This works for me; if you are still getting a bad result, I would double-check your inputs and conversions:
var loc1 = new Location(33.887749, -84.345818);
var loc2 = new Location(33.807920, -84.046791);
// result is 18.023...
double distance = Location.CalculateDistance(loc1, loc2, DistanceUnits.Miles);
I have a question about intersecting a line with a polygon. I have a line (green) that represents a walking path and a restriction polygon (black).
You can see this in the image below:
I'm now wondering if it is possible to extract the line segments that are outside the polygon (the red lines in the upper-left corner).
First, I created polygon using something like this:
var geometryFactory = NtsGeometryServices.Instance.CreateGeometryFactory(srid: 4326);
var vertices = new List<Coordinate>();
foreach (var coords in stringPolygon)
{
    var coordinates = coords.Split(",");
    var x = double.Parse(coordinates[0]);
    var y = double.Parse(coordinates[1]);
    var newCoordinate = new Coordinate(y, x);
    vertices.Add(newCoordinate);
}
var outerRing = geometryFactory.CreateLinearRing(vertices.ToArray());
spatialData.FieldPolygon = geometryFactory.CreatePolygon(outerRing);
Then I created a LineString like this:
var vertices = new List<Coordinate>();
foreach (var trip in tripSegments.Data)
{
    var newCoordinate = new Coordinate(trip.Lng, trip.Lat);
    vertices.Add(newCoordinate);
}
spatialData.TripLine = geometryFactory.CreateLineString(vertices.ToArray());
I tried Intersection:
var intersect = spatialData.FieldPolygon.Boundary.Intersection(spatialData.TripLine);
and also Difference, but without any luck:
var intersect = spatialData.FieldPolygon.Boundary.Difference(spatialData.TripLine);
I also tried a WKTReader and a combination of Intersection and Difference like this (I'm wondering if this is even the right approach):
var reader = new WKTReader();
var targetMultiPolygon = reader.Read(spatialData.TripLine.ToString());
var bufferPolygon = reader.Read(spatialData.FieldPolygon.Boundary.ToString());
var intersection = targetMultiPolygon.Intersection(bufferPolygon);
var targetClipped = targetMultiPolygon.Difference(intersection);
var wktTargetAfterClip = targetClipped.ToString();
But I got an error like this (when using the WKT approach):
TopologyException: found non-noded intersection between LINESTRING(26.563827556466403 43.52431484490672, 26.56386783617048 43.52429417990681) and LINESTRING(26.565492785081837 43.52349421574761, 26.56386783617048 43.52429417990681) [ (26.56386783617048, 43.52429417990681, NaN) ]
UPDATE
I've fixed the topology issue with the WKTReader by using
var bufferPolygon = reader.Read(spatialData.FieldPolygon.Boundary.ToString());
i.e. reading the polygon's Boundary instead of the polygon itself, which fixed the topology issue. But the main problem still remains, as can be seen here.
I used this block of code to extract the lines that don't intersect the boundary and add them to a different layer, but as you can see in the red square, some lines that are outside the black polygon are still green when they should be purple (dashed).
var outsideLines = new List<ILineString>();
foreach (ILineString lineString in geometries.Geometries)
{
    var isIntersecting = lineString.Intersects(spatialData.FieldPolygon.Boundary);
    if (!isIntersecting)
    {
        outsideLines.Add(lineString);
    }
}
spatialData.IntersectedMultiLine = geometryFactory.CreateMultiLineString(outsideLines.ToArray());
Now, my main question is: is it possible to extract the lines that are outside the polygon, and am I on the right track?
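In case it helps to clarify what I'm after, this is the kind of call I imagined should exist (just a sketch based on my current understanding of the NTS API, differencing the trip line against the full polygon instead of its Boundary; I haven't verified it covers all my cases):
// Sketch: the parts of the trip line that lie outside the field polygon.
// Difference against the polygon itself (an area), not its Boundary (a ring),
// should give back a LineString/MultiLineString made of the outside segments.
var outsideGeometry = spatialData.TripLine.Difference(spatialData.FieldPolygon);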
I have a constantly updated point array with a length of 4, and I want to filter certain "outliers" out of the array.
I'm creating a VR/AR app with OpenCVForUnity and Unity.
Using a live feed from the webcam, I have a 4-element point array which updates constantly and contains x, y 2D coordinates representing the four corners of a tracked object. I'm using them as source values to draw a Rect in Unity.
Each slot in the array contains data such as this:
{296.64151, 88.096649}
However, Unity throws errors and crashes when a value in the array has:
negative values (sometimes happens because of a tracking error)
large values exceeding the canvas size (same reason; currently using 1280 x 720)
An example of a "bad value" would look like this:
{-1745.10614, 46.908913} <- negative / big value on X
{681.00519, 1234.15828} <- big value on Y
So I have to somehow create a filter for the array to make the app work.
The order should not be altered, and the data constantly updates, so ignoring/skipping bad values would be optimal. I'm new to C# and I have searched, but with no luck for "point array".
Here's my code:
Point[] ptsArray = patternTrackingInfo.points2d.toArray();
pt1 = ptsArray[0];
pt2 = ptsArray[2];
pt3 = new OpenCVForUnity.CoreModule.Point(ptsArray[2].x + 5, ptsArray[2].y + 5);
for (int i = 0; i < 4; i++)
{
    cropRect = new OpenCVForUnity.CoreModule.Rect(pt1, pt3);
}
pt1 represents the top-left corner and pt2 the bottom-right.
I heard that the bottom-right point is exclusive in OpenCV itself, so I tried to add a new point for that (pt3), but it still crashes, so I believe it is not related to that.
Any suggestions for creating a filter for a point array would be very helpful. Thank you.
I would just create a new list of Points and loop through the existing list, adding only the valid points to the new list. Then that becomes the list that you convert to an array for your OpenCV calls.
List<Point> filteredList = new List<Point>();
for (int i = 0; i < patternTrackingInfo.points2d.Count; i++)
{
    Point p = patternTrackingInfo.points2d[i];
    // Skip "bad values" as described in the question: negative coordinates
    // or coordinates outside the 1280 x 720 canvas
    if (p.x < 0 || p.y < 0 || p.x > 1280 || p.y > 720)
        continue;
    filteredList.Add(p);
}
Point[] ptsArray = filteredList.ToArray();
pt1 = ptsArray[0];
pt2 = ptsArray[2];
pt3 = new OpenCVForUnity.CoreModule.Point(ptsArray[2].x + 5, ptsArray[2].y + 5);
for (int i = 0; i < 4; i++)
{
    cropRect = new OpenCVForUnity.CoreModule.Rect(pt1, pt3);
}
Something is not working as it should. If you take a look at the screenshot you will see that the result is weird. The floor of the pavilion is rendered correctly, but the columns are kind of transparent, and the roof is completely weird. I used Assimp.NET to import this mesh from a .obj file. In other engines it looked correct. Another thing: if I set CullMode to Back, it culls the front faces?! I think it could be one of three things: the mesh was imported wrong, the z-buffer is not working, or maybe I need multiple world matrices (I'm using only one).
Does anybody know what this could be?!
Screenshot:
Here is some code:
DepthBuffer/DepthStencilView
var depthBufferDescription = new Texture2DDescription
{
    Format = Format.D32_Float_S8X24_UInt,
    ArraySize = 1,
    MipLevels = 1,
    Width = BackBuffer.Description.Width,
    Height = BackBuffer.Description.Height,
    SampleDescription = swapChainDescription.SampleDescription,
    BindFlags = BindFlags.DepthStencil
};
var depthStencilViewDescription = new DepthStencilViewDescription
{
    Dimension = SwapChain.Description.SampleDescription.Count > 1 || SwapChain.Description.SampleDescription.Quality > 0 ? DepthStencilViewDimension.Texture2DMultisampled : DepthStencilViewDimension.Texture2D
};
var depthStencilStateDescription = new DepthStencilStateDescription
{
    IsDepthEnabled = true,
    DepthComparison = Comparison.Always,
    DepthWriteMask = DepthWriteMask.All,
    IsStencilEnabled = false,
    StencilReadMask = 0xff,
    StencilWriteMask = 0xff,
    FrontFace = new DepthStencilOperationDescription
    {
        Comparison = Comparison.Always,
        PassOperation = StencilOperation.Keep,
        FailOperation = StencilOperation.Keep,
        DepthFailOperation = StencilOperation.Increment
    },
    BackFace = new DepthStencilOperationDescription
    {
        Comparison = Comparison.Always,
        PassOperation = StencilOperation.Keep,
        FailOperation = StencilOperation.Keep,
        DepthFailOperation = StencilOperation.Decrement
    }
};
Loading mesh files:
public static Mesh Stadafax_ModelFromFile(string path)
{
    if (_importContext.IsImportFormatSupported(Path.GetExtension(path)))
    {
        var imported = _importContext.ImportFile(path, PostProcessSteps.Triangulate | PostProcessSteps.FindDegenerates | PostProcessSteps.FindInstances | PostProcessSteps.FindInvalidData | PostProcessSteps.JoinIdenticalVertices | PostProcessSteps.OptimizeGraph | PostProcessSteps.ValidateDataStructure | PostProcessSteps.FlipUVs);
        Mesh engineMesh = new Mesh();
        Assimp.Mesh assimpMesh = imported.Meshes[0];
        foreach (Face f in assimpMesh.Faces)
        {
            engineMesh.Structure.Faces.Add(new Rendering.Triangle((uint)f.Indices[0], (uint)f.Indices[1], (uint)f.Indices[2]));
        }
        List<Vector3D>[] uv = assimpMesh.TextureCoordinateChannels;
        for (int i = 0; i < assimpMesh.Vertices.Count; i++)
        {
            engineMesh.Structure.Vertices.Add(new Vertex(new Vector4(assimpMesh.Vertices[i].X, assimpMesh.Vertices[i].Y, assimpMesh.Vertices[i].Z, 1), RenderColorRGBA.White, new Vector2(uv[0][i].X, uv[0][i].Y)));
        }
        return engineMesh;
    }
    else
    {
        NoëlEngine.Common.Output.Log("Model format not supported!", "Importeur", true);
        return null;
    }
}
}
If anybody has even the smallest idea what this could be, please just write a comment.
What you see are polygons that are actually behind others still being drawn on top of them.
When you configure the depth buffer via the DepthStencilStateDescription, you set DepthComparison to Comparison.Always. This is not what you want; you want Comparison.Less.
What's the logic behind it? For every pixel, the new depth value is checked to decide whether it can actually be written to the depth buffer. This check is configured with the Comparison you specified.
Comparison.Always always allows the new value to be written. So no matter if a polygon is actually behind others or above them or whatever, it will always override ("draw above") what's already there - even if it's behind it spatially.
Comparison.Less only writes the value if it is less than the current value in the depth buffer. Don't forget that smaller depth values are closer to the viewer. So a polygon closer to an existing one will override ("draw above") the old one. But if it is behind it, it won't draw. That's exactly what you want.
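In your case that means changing just this one field in the DepthStencilStateDescription from your question; a minimal sketch (the stencil-related settings can stay as they are):
var depthStencilStateDescription = new DepthStencilStateDescription
{
    IsDepthEnabled = true,
    // Only let a pixel through if its depth is less than (closer than) what is already in the buffer
    DepthComparison = Comparison.Less,
    DepthWriteMask = DepthWriteMask.All,
    IsStencilEnabled = false
};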
You can also guess what the other Comparison enumerations now do, and play around with them :)
I have a lot of data stored as:
public class Position {
    public double X { get; set; }
    public double Y { get; set; }
    public double Z { get; set; }
}
Now I would like to find the shortest path between two of these Position objects by going through the array of all Positions.
It's like finding a path in a star map with known star positions: I want to go from star A to star B; which path must I take?
My Position values can be negative doubles.
The constraint should be something like a max distance to the next position (jump range), and of course I'm trying to find the path that goes through the minimum number of Positions.
There is a simple mathematical formula for this.
Let's say you have:
var A = new Position();
var B = new Position();
// Assign values
Using the formula √((A.X - B.X)² + (A.Y - B.Y)² + (A.Z - B.Z)²), we have the following:
public static double Distance(Position A, Position B)
{
    var xDifferenceSquared = Math.Pow(A.X - B.X, 2);
    var yDifferenceSquared = Math.Pow(A.Y - B.Y, 2);
    var zDifferenceSquared = Math.Pow(A.Z - B.Z, 2);
    return Math.Sqrt(xDifferenceSquared + yDifferenceSquared + zDifferenceSquared);
}