The NetTopologySuite library gives very different results compared to the SqlGeography library - c#

Update:
I changed the polygon as suggested, and I now get a LineString instead of a MultiLineString, but the difference in coordinates is still the same.
I'm comparing two libraries for working with spatial data, but they are giving me quite different results for the same input.
Here is a test using the Microsoft.SqlServer.Types library:
// Line from New York to Paris
SqlGeography line = SqlGeography.STLineFromText(new System.Data.SqlTypes.SqlChars("LINESTRING(-73.935242 40.730610, 2.349014 48.864716)"), 4326);
// Polygon in the Atlantic
SqlGeography polygon = SqlGeography.STPolyFromText(new System.Data.SqlTypes.SqlChars("POLYGON((-40 60, -40 30, -20 30, -20 60, -40 60))"), 4326);
SqlGeography intersection = line.STIntersection(polygon);
{LINESTRING (-19.99999999999997 52.21038270929611, -39.99999999999993 51.451383473748834)}
Here is a test with the NetTopologySuite:
var polygonText = "POLYGON((-40 60, -40 30, -20 30, -20 60, -40 60))";
string lineText = "LINESTRING(-73.935242 40.730610, 2.349014 48.864716)";
var rdr = new NetTopologySuite.IO.WKTReader();
var geometryPolygon = rdr.Read(polygonText);
var geometryLine = rdr.Read(lineText);
var polygon = _geometryFactory.CreatePolygon(geometryPolygon.Coordinates);
var line = _geometryFactory.CreateLineString(geometryLine.Coordinates);
var intersects = line.Intersection(polygon);
{LINESTRING (-40 44.34908739019244, -20 46.48166531033365)}
Any idea why there is such a large difference in the result?

The problem is that NetTopologySuite only performs planar 2D geometry operations.
SQL Server's geography type performs geodesic computations for distances and other spatial operations.
If you use SQL Server's geometry type instead, you should get more or less the same results.
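As a quick check, you can push the same WKT through the planar SqlGeometry type from the same library. A minimal sketch (the expected match with the NetTopologySuite output is my assumption, based on both being planar):
// Same WKT as above, but through the planar SqlGeometry type (SRID 0, no ellipsoid)
SqlGeometry planarLine = SqlGeometry.STLineFromText(new System.Data.SqlTypes.SqlChars("LINESTRING(-73.935242 40.730610, 2.349014 48.864716)"), 0);
SqlGeometry planarPolygon = SqlGeometry.STPolyFromText(new System.Data.SqlTypes.SqlChars("POLYGON((-40 60, -40 30, -20 30, -20 60, -40 60))"), 0);
// Planar intersection; this should be close to the NetTopologySuite result above
Console.WriteLine(planarLine.STIntersection(planarPolygon).ToString());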

UPDATE:
The question is still not very clear: using SqlServerTypes and SQL Server, when checking for intersections I get a MultiLineString geography type, but the question suggests it's a LineString.
This is what I get both in SQL Server and SqlServerTypes:
MULTILINESTRING ((2.349014 48.864716, -19.99999999999997 52.21038270929611), (-39.99999999999993 51.451383473748834, -73.935242 40.73061))
Reversing the Polygon in SQL.
The original polygon seems wrong to me: in the question the OP says it's a polygon in the Atlantic Ocean, but in fact it is a polygon covering the entire Earth except that area of the Atlantic (its ring orientation is inverted).
DECLARE @polygon geography, @rvrPolygon geography;
SET @polygon = geography::STPolyFromText('POLYGON((-40 60, -20 60, -20 30, -40 30, -40 60))', 4326);
SET @rvrPolygon = @polygon.ReorientObject();
DECLARE @lineString geography;
SET @lineString = geography::STLineFromText('LINESTRING(-73.935242 40.730610, 2.349014 48.864716)', 4326);
SELECT
    rvrPol = @rvrPolygon,
    --@polygon.ToString(),
    --@lineString.ToString(),
    polygon = @polygon,
    linestring = @lineString,
    Intersections = @polygon.STIntersection(@lineString).ToString(),
    IntersectsRvrPoly = @rvrPolygon.STIntersection(@lineString).ToString()
Without reversing the polygon it gives me this (matching the SqlServerTypes result):
MULTILINESTRING ((2.349014 48.864716, -19.999999999999972 52.210382709296113), (-39.999999999999929 51.451383473748834, -73.935242 40.73061))
With the reverse polygon it gives me:
LINESTRING (-19.999999999999972 52.210382709296113, -39.999999999999929 51.451383473748834)
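The same fix works from C#. A sketch, assuming a version of Microsoft.SqlServer.Types that exposes ReorientObject:
SqlGeography line = SqlGeography.STLineFromText(new System.Data.SqlTypes.SqlChars("LINESTRING(-73.935242 40.730610, 2.349014 48.864716)"), 4326);
SqlGeography polygon = SqlGeography.STPolyFromText(new System.Data.SqlTypes.SqlChars("POLYGON((-40 60, -20 60, -20 30, -40 30, -40 60))"), 4326);
// ReorientObject flips the ring orientation, turning "everything except the
// Atlantic patch" into just the Atlantic patch
SqlGeography reoriented = polygon.ReorientObject();
Console.WriteLine(line.STIntersection(reoriented).ToString()); // expect a single LINESTRING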
Without the correct data and steps it's impossible to replicate this in NetTopologySuite and come to a conclusion as to whether this is an actual bug, a data issue, or a config issue.
Your two WKTs are different, so I'm not sure why you are expecting the same results when the test cases are not identical.
If you fire up SQL Server Management Studio and run the following query, you will understand how and why they are different.
DECLARE @polygon geography;
SET @polygon = geography::STPolyFromText('POLYGON((-40 60, -20 60, -20 30, -40 30, -40 60))', 4326);
DECLARE @lineString geography;
SET @lineString = geography::STLineFromText('LINESTRING(-73.935242 40.730610, 2.349014 48.864716)', 4326);
DECLARE @polygon1 geography;
SET @polygon1 = geography::STPolyFromText('POLYGON ((60 -40, 60 -20, 30 -20, 30 -40, 60 -40))', 4326);
DECLARE @lineString1 geography;
SET @lineString1 = geography::STLineFromText('LINESTRING (40.730610 -73.935242, 48.864716 2.349014)', 4326);
SELECT
    @polygon.ToString(),
    @lineString.ToString(),
    p = @polygon,
    l = @lineString,
    Intt = @polygon.STIntersection(@lineString),
    Intersections = @polygon.STIntersection(@lineString).ToString()
UNION ALL
SELECT
    @polygon1.ToString(),
    @lineString1.ToString(),
    p = @polygon1,
    l = @lineString1,
    Intt = @polygon1.STIntersection(@lineString1),
    Intersections = @polygon1.STIntersection(@lineString1).ToString()
If you analyze the Spatial results tab and select the p columns:
@polygon is all the green section except the one I've crossed out, i.e. the entire world except that particular polygon.
@polygon1 is the orange polygon I've highlighted in red.
Similarly, the LineStrings are different as well.
So obviously their intersections will be different: one is a LineString, the other is a MultiLineString.
WKT is a global standard and should behave the same across NetTopologySuite, SQL Server, Microsoft.SqlServer.Types, etc. (see the Well-known text article on Wikipedia).
If you have another example where the WKTs are identical and the result is still not correct, I'm happy to take a look and update the answer, but at this point the test cases are not identical.

Related

Cross-platform solution to do geography calculations

I'm working on migrating a .Net framework application to .Net Core and I need to support running on Linux.
The application needs to calculate the intersection of polygons and very long lines on the Earth's surface, so it uses Geography objects as opposed to Geometry objects to take into account the Earth's ellipsoidal shape.
For this we use Microsoft.SqlServer.Types, which lets us do the following:
// Line from New York to Paris
SqlGeography line = SqlGeography.STGeomFromText(new System.Data.SqlTypes.SqlChars("LINESTRING(40.730610 -73.935242, 48.864716 2.349014)"), 4326);
// Polygon in the Atlantic
SqlGeography polygon = SqlGeography.STGeomFromText(new System.Data.SqlTypes.SqlChars("POLYGON((60 -40, 60 -20, 30 -20, 30 -40, 60 -40))"), 4326);
// Contains the two locations where the line intersects with the polygon
SqlGeography intersection = line.STIntersection(polygon);
The problem is that Microsoft.SqlServer.Types only works on Windows. How can I get the same result in a way that will also compile and run on Linux?
I've looked into NetTopologySuite, but it seems to only support geometry calculations.
Not sure if you are using EF Core or NetTopologySuite for handling geography data in your project, but all of this is already supported in that package:
Microsoft
Nuget
As you are using the well-known text (WKT) format, you can use the docs from here:
Github Docs
I can't post the source code of my solution, but an example of how to use it would be:
public class GeographyHelper
{
    private static GeometryFactory _geometryFactory
    {
        get
        {
            // SRID 4326 = WGS 84, matching the SqlGeography examples above
            return NetTopologySuite.NtsGeometryServices.Instance.CreateGeometryFactory(4326);
        }
    }

    public bool TestIntersectsAPolygon(double latitude, double longitude)
    {
        var wellKnownText = "YOUR POLYGON WKT";
        // Note the (x, y) = (longitude, latitude) ordering expected by NTS
        var point = _geometryFactory.CreatePoint(new Coordinate(longitude, latitude));
        var rdr = new NetTopologySuite.IO.WKTReader();
        var geometry = rdr.Read(wellKnownText);
        var polygon = _geometryFactory.CreatePolygon(geometry.Coordinates);
        var intersects = polygon.Intersects(point);
        return intersects;
    }
}
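Hypothetical usage of the helper above, with the New York point from the question:
var helper = new GeographyHelper();
bool intersects = helper.TestIntersectsAPolygon(40.730610, -73.935242);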

Get colors between two color HEX refs or RGB [duplicate]

Possible Duplicate:
Color Interpolation Between 3 Colors in .NET
I have been trying to generate categorized lists of colors using C#:
Red:
255, 69, 0
255, 99, 71
etc..
Green:
0, 250, 154
143, 188, 139
etc...
So far I have been pretty unsuccessful. Ideally, what I'd like is a way to supply two HEX or RGB references and get back a list of, say, 10 colors between those two references. Is this possible in C#?
EDIT
Found this... http://meyerweb.com/eric/tools/color-blend/ just converting the JS to C# now. Will post when it's done.
I am not aware of a built-in function which would help you, but you can do it yourself.
As long as a color can be defined using 3 numbers (R, G, B), you can take two colors:
(R1,G1,B1)
(R2,G2,B2)
Then divide the difference between each pair into intervals and produce the intermediate values step by step:
int numberOfIntervals = 10; // or change to whatever you want
var interval_R = (R2 - R1) / numberOfIntervals; // note: integer division truncates;
var interval_G = (G2 - G1) / numberOfIntervals; // use floats if you need exact steps
var interval_B = (B2 - B1) / numberOfIntervals;
var current_R = R1;
var current_G = G1;
var current_B = B1;
for (var i = 0; i <= numberOfIntervals; i++)
{
    var color = Color.FromArgb(current_R, current_G, current_B);
    // do something with color.
    // increment.
    current_R += interval_R;
    current_G += interval_G;
    current_B += interval_B;
}
I haven't compiled the code, but you get the idea.
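For reference, here is a compilable variant of the same idea (the method name and float-based stepping are mine):
using System.Collections.Generic;
using System.Drawing;

static List<Color> Blend(Color from, Color to, int steps)
{
    var result = new List<Color>();
    for (int i = 0; i <= steps; i++)
    {
        // Interpolate each channel with float math to avoid the truncation
        // that integer division would introduce.
        float t = (float)i / steps;
        result.Add(Color.FromArgb(
            (int)(from.R + (to.R - from.R) * t),
            (int)(from.G + (to.G - from.G) * t),
            (int)(from.B + (to.B - from.B) * t)));
    }
    return result;
}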
What you are looking for is called interpolation. In this particular scenario you need to interpolate data between two key points.
Since interpolation is a really common scenario when programming, I wrote a generic solution for it, easily allowing you to interpolate between two or more key points, using either linear or even cardinal spline interpolation.
Using my library you could calculate intermediate colors as follows:
var keyPoints = new CumulativeKeyPointCollection<Color, double>(
new ColorInterpolationProvider() );
keyPoints.Add( Color.FromArgb(0, 250, 154) );
keyPoints.Add( Color.FromArgb(143, 188, 139) );
var linear = new LinearInterpolation<Color, double>( keyPoints );
// E.g. to get a color halfway the two other colors.
Color colorHalfway = linear.Interpolate( 0.5 );
You would have to implement ColorInterpolationProvider by extending AbstractInterpolationProvider<Color, double>, but this is quite straightforward, and more information can be found in my blog post.
This example uses the Media.Color class, but you could just as well support any other Color class by passing along a different interpolation provider.

Using matrices to normalize transformed images in C#

Consider the two images below (original and transformed respectively). The three blue squares (markers) are used for orientation.
Original Image:
We know the width and height.
We know the (x,y) coordinates of all three markers.
Transformed Image:
We can detect the (x,y) coordinates of all three markers.
As a result, we can calculate the angle of rotation, the amount of (x,y) translation and the (x,y) scaling factor.
I now want to use the System.Drawing.Graphics object to perform RotateTransform, TranslateTransform and ScaleTransform. The trouble is, the resulting image is NEVER like the original.
I've been told on Stack Overflow that the order of applying transformations does not matter, but my observation is different. Below is some code that generates an original image and attempts to draw it on a new canvas after introducing some transformations. You can change the order of the transformations to see different results.
public static void GenerateImages()
{
    int width = 200;
    int height = 200;
    string filename = "";
    System.Drawing.Bitmap original = null;     // Original image.
    System.Drawing.Bitmap transformed = null;  // Transformed image.
    System.Drawing.Graphics graphics = null;   // Drawing context.

    // Generate original image.
    original = new System.Drawing.Bitmap(width, height);
    graphics = System.Drawing.Graphics.FromImage(original);
    graphics.Clear(System.Drawing.Color.MintCream);
    graphics.DrawRectangle(System.Drawing.Pens.Red, 0, 0, original.Width - 1, original.Height - 1);
    graphics.FillRectangle(System.Drawing.Brushes.Blue, 10, 10, 20, 20);
    graphics.FillRectangle(System.Drawing.Brushes.Blue, original.Width - 31, 10, 20, 20);
    graphics.FillRectangle(System.Drawing.Brushes.Blue, original.Width - 31, original.Height - 31, 20, 20);
    filename = System.IO.Path.Combine(System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location), "Original.png");
    original.Save(filename, System.Drawing.Imaging.ImageFormat.Png);
    graphics.Dispose();

    // Generate transformed image.
    transformed = new System.Drawing.Bitmap(width, height);
    graphics = System.Drawing.Graphics.FromImage(transformed);
    graphics.Clear(System.Drawing.Color.LightBlue);
    graphics.ScaleTransform(0.5F, 0.7F);    // Add arbitrary transformation.
    graphics.RotateTransform(8);            // Add arbitrary transformation.
    graphics.TranslateTransform(100, 50);   // Add arbitrary transformation.
    graphics.DrawImage(original, 0, 0);
    filename = System.IO.Path.Combine(System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location), "Transformed.png");
    transformed.Save(filename, System.Drawing.Imaging.ImageFormat.Png);
    graphics.Dispose();
    transformed.Dispose();
    original.Dispose();
    System.Diagnostics.Process.Start(filename);
}
I can see two potential issues here:
Since the transformations are being applied one after another, they render the originally calculated values useless.
The graphics object applies rotation about the (0, 0) coordinate, whereas I should probably be doing something different. Not sure what.
From what I understand from here, here, and here, the System.Drawing transformations are performed by multiplying matrices together in the order in which you apply the transformations.
With integers, a*b*c = b*a*c.
However, with matrices, ABC almost never equals BAC.
So it appears the order of transformations does matter, since matrix multiplication is not commutative.
Put another way, it seems that if I do the following to your picture:
case 1:
translate (100,50)
scale (0.5,0.7)
the picture ends up with its top-left corner at (100,50)
and its bottom-right corner at (200,190).
case 2:
scale (0.5,0.7)
translate (100,50)
the picture ends up with its top-left corner at (50,35)
and its bottom-right corner at (150,175).
This means that by scaling first and then translating, the scaling also scales the amount of translation; that is why in case 2 the picture ended up at (50,35) for the top-left corner: half of the translated X and 0.7 of the translated Y.
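To see this concretely, here is a small sketch using System.Drawing's Matrix class (my own example, not from the question's code). Matrix methods default to MatrixOrder.Prepend, mirroring how the Graphics.*Transform calls compose:
using System.Drawing;
using System.Drawing.Drawing2D;

// Case 1: translate, then scale (same call order as case 1 above).
var m1 = new Matrix();
m1.Translate(100, 50);   // MatrixOrder.Prepend by default, like Graphics.TranslateTransform
m1.Scale(0.5f, 0.7f);

// Case 2: scale, then translate.
var m2 = new Matrix();
m2.Scale(0.5f, 0.7f);
m2.Translate(100, 50);

var p1 = new[] { new PointF(0, 0) };
var p2 = new[] { new PointF(0, 0) };
m1.TransformPoints(p1);  // (100, 50) - the translation is not scaled
m2.TransformPoints(p2);  // (50, 35)  - the translation gets scaled too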

How to draw scout/reference lines in DICOM

I am a beginner in DICOM development. I need to create a localizer (scout) line on a DICOM image. Are there any good ideas on how to do this?
David Brabant already put you in the right direction (if you want to work with DICOM you should definitely read and treasure dclunie's medical image FAQ). Let's see if I can elaborate on it and make it easier for you to implement.
I assume you have a tool/library to extract tags from a DICOM file (Offis' DCMTK?). For the sake of exemplification I'll refer to a CT scan (many slices, i.e. many images) and a scout image, onto which you want to display localizer lines. Each DICOM image, including your CT slices and your scout, contains full information about its location in space, in these two tags:
Group,Elem   VR  Value                          Name of the tag
---------------------------------------------------------------------
(0020,0032)  DS  [-249.51172\-417.51172\-821]   # ImagePositionPatient
                   X0         Y0        Z0
(0020,0037)  DS  [1\0\0\0\1\0]                  # ImageOrientationPatient
                  A B C D E F
ImagePositionPatient has the global coordinates in mm of the first pixel transmitted (the top left-hand corner pixel, to be clear) expressed as (x,y,z). I marked them X0, Y0, Z0. ImageOrientationPatient contains two vectors, both of three components, specifying the direction cosines of the first row of pixels and first column of pixels of the image. Understanding direction cosines doesn't hurt (see e.g. http://mathworld.wolfram.com/DirectionCosine.html), but the method suggested by dclunie works directly with them, so for now let's just say they give you the orientation in space of the image plane. I marked them A-F to make formulas easier.
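For example, here is a sketch of pulling those two tags into arrays, assuming your DICOM library hands you the raw backslash-separated DS strings:
using System;
using System.Globalization;
using System.Linq;

// Values as extracted from the two tags above (backslash-separated DS strings)
double[] pos = "-249.51172\\-417.51172\\-821"
    .Split('\\')
    .Select(s => double.Parse(s, CultureInfo.InvariantCulture))
    .ToArray(); // X0, Y0, Z0
double[] orient = "1\\0\\0\\0\\1\\0"
    .Split('\\')
    .Select(s => double.Parse(s, CultureInfo.InvariantCulture))
    .ToArray();
double[] rowDirCos = { orient[0], orient[1], orient[2] }; // A, B, C
double[] colDirCos = { orient[3], orient[4], orient[5] }; // D, E, F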
Now, in the code given by dclunie (I believe it's intended to be C, but it's so simple it should work just as well in Java, C#, awk, Vala, Octave, etc.) the conventions are the following:
src_* = refers to the source image, i.e. the CT slice
dst_* = refers to the destination image, i.e. the scout
*_pos_x, *_pos_y, *_pos_z = the X0, Y0, Z0 above
*_row_dircos_x, *_row_dircos_y, *_row_dircos_z = the A, B, C above
*_col_dircos_x, *_col_dircos_y, *_col_dircos_z = the D, E, F above
After setting the right values just apply these:
dst_nrm_dircos_x = dst_row_dircos_y * dst_col_dircos_z
- dst_row_dircos_z * dst_col_dircos_y;
dst_nrm_dircos_y = dst_row_dircos_z * dst_col_dircos_x
- dst_row_dircos_x * dst_col_dircos_z;
dst_nrm_dircos_z = dst_row_dircos_x * dst_col_dircos_y
- dst_row_dircos_y * dst_col_dircos_x;
src_pos_x -= dst_pos_x;
src_pos_y -= dst_pos_y;
src_pos_z -= dst_pos_z;
dst_pos_x = dst_row_dircos_x * src_pos_x
+ dst_row_dircos_y * src_pos_y
+ dst_row_dircos_z * src_pos_z;
dst_pos_y = dst_col_dircos_x * src_pos_x
+ dst_col_dircos_y * src_pos_y
+ dst_col_dircos_z * src_pos_z;
dst_pos_z = dst_nrm_dircos_x * src_pos_x
+ dst_nrm_dircos_y * src_pos_y
+ dst_nrm_dircos_z * src_pos_z;
Or, if you have some fancy matrix class, you can build this matrix and multiply it with your point coordinates.
    [ dst_row_dircos_x  dst_row_dircos_y  dst_row_dircos_z  -dst_pos_x ]
M = [ dst_col_dircos_x  dst_col_dircos_y  dst_col_dircos_z  -dst_pos_y ]
    [ dst_nrm_dircos_x  dst_nrm_dircos_y  dst_nrm_dircos_z  -dst_pos_z ]
    [ 0                 0                 0                  1         ]
That would be like this:
Scout_Point(x,y,z,1) = M * CT_Point(x,y,z,1)
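As a minimal C# sketch of that computation (helper name and array layout are mine; it follows the subtract-then-rotate form of dclunie's code above):
// Project a point given in global/patient coordinates (mm) into the scout's
// coordinate system: x, y in the scout plane, z = distance from the plane.
static double[] ProjectToScout(double[] srcPos, double[] dstPos, double[] dstRow, double[] dstCol)
{
    // Normal of the scout plane: cross product of the row and column direction cosines.
    double[] dstNrm =
    {
        dstRow[1] * dstCol[2] - dstRow[2] * dstCol[1],
        dstRow[2] * dstCol[0] - dstRow[0] * dstCol[2],
        dstRow[0] * dstCol[1] - dstRow[1] * dstCol[0]
    };
    // Translate so the scout's top left-hand corner becomes the origin.
    double dx = srcPos[0] - dstPos[0];
    double dy = srcPos[1] - dstPos[1];
    double dz = srcPos[2] - dstPos[2];
    // Rotate into the scout's row/column/normal frame.
    return new[]
    {
        dstRow[0] * dx + dstRow[1] * dy + dstRow[2] * dz,
        dstCol[0] * dx + dstCol[1] * dy + dstCol[2] * dz,
        dstNrm[0] * dx + dstNrm[1] * dy + dstNrm[2] * dz
    };
}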
Said all that, which points of the CT should we convert to create a line on the scout? Also for this dclunie already suggests a general solution:
"My approach is to project the square that is the bounding box of the source image (i.e. lines joining the TLHC, TRHC,BRHC and BLHC of the slice)."
If you project the four corner points of the CT slice, you'll get a line for CT slices perpendicular to the scout, and a trapezoid for non-perpendicular slices. Now, if your CT slice is aligned with the coordinate axes (i.e. ImageOrientationPatient = [1\0\0\0\1\0]), the four points are trivial: you compute the width/height of the image in mm using the number of rows/columns and the pixel spacing along the x/y directions and sum things up appropriately. If you want to implement the generic case, then you need a little trigonometry... or maybe not. It's maybe time you read the definition of direction cosines if you haven't yet.
I'll try to put you on track. E.g. working on the TRHC, you know where the voxel is in the image plane:
# Pixel location of the TRHC
x_pixel = number_of_columns-1 # Counting from 0
y_pixel = 0
z_pixel = 0 # We're on a plane!
The pixel spacing values in DICOM refer to the image plane, so you can simply multiply x and y by those values to get their positions in mm, while z is 0 (both in pixels and in mm). I am talking about these values:
(0028,0011) US 512 # 2, 1 Columns
(0028,0010) US 512 # 2, 1 Rows
(0028,0030) DS [0.9765625\0.9765625] # 20, 2 PixelSpacing
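As a sketch with the values above (note that PixelSpacing is ordered row spacing first, i.e. [y, x]):
double[] pixelSpacing = { 0.9765625, 0.9765625 }; // (0028,0030): [between-rows, between-columns]
int columns = 512;                                // (0028,0011)
double trhcXmm = (columns - 1) * pixelSpacing[1]; // TRHC x position in mm
double trhcYmm = 0 * pixelSpacing[0];             // TRHC y position in mm (top row)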
The matrix M above is a generic transformation from global to image coordinates, given the direction cosines. What you need now is something that does the inverse job (image to global) on the source images (the CT slices). I'll let you dig into the geometry books to be sure, but I think it should be something like this (the rotation part is transposed, the translation has no sign change, and of course we use the src_* values):
     [ src_row_dircos_x  src_col_dircos_x  src_nrm_dircos_x  src_pos_x ]
M2 = [ src_row_dircos_y  src_col_dircos_y  src_nrm_dircos_y  src_pos_y ]
     [ src_row_dircos_z  src_col_dircos_z  src_nrm_dircos_z  src_pos_z ]
     [ 0                 0                 0                 1         ]
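Applied to a point in the slice plane (z = 0), M2 reduces to this sketch (helper name is mine):
// Map a point given in slice-plane millimetres (z = 0) to global/patient coordinates.
static double[] SliceMmToGlobal(double xMm, double yMm, double[] srcPos, double[] srcRow, double[] srcCol)
{
    return new[]
    {
        srcRow[0] * xMm + srcCol[0] * yMm + srcPos[0],
        srcRow[1] * xMm + srcCol[1] * yMm + srcPos[1],
        srcRow[2] * xMm + srcCol[2] * yMm + srcPos[2]
    };
}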
Convert points in the CT slice (e.g. the four corners) to millimetres and then apply M2 to get them in global coordinates. Then you can feed them to the procedure reported by dclunie. Cross-check my maths before using this, e.g. for patient diagnostics! ;-)
Hope this helps you understand dclunie's method better. Cheers.

How to get the nearest point on a SqlGeometry object from another SqlGeometry object?

I have a set of line and polygon objects (SqlGeometry type) and a point object (SqlGeometry type). How can I find the nearest point on each line from the given point object? Are there any APIs for doing this operation?
Here is a sample presenting a possible solution using SqlGeometry and C#; no SQL Server is required:
using System;
using Microsoft.SqlServer.Types;

namespace MySqlGeometryTest
{
    class ReportNearestPointTest
    {
        static void ReportNearestPoint(string wktPoint, string wktGeom)
        {
            SqlGeometry point = SqlGeometry.Parse(wktPoint);
            SqlGeometry geom = SqlGeometry.Parse(wktGeom);
            // A buffer around the point with radius = distance to the geometry
            // touches the geometry exactly at the nearest point(s).
            double distance = point.STDistance(geom).Value;
            SqlGeometry pointBuffer = point.STBuffer(distance);
            SqlGeometry pointResult = pointBuffer.STIntersection(geom);
            string wktResult = new string(pointResult.STAsText().Value);
            Console.WriteLine(wktResult);
        }

        static void Main(string[] args)
        {
            ReportNearestPoint("POINT(10 10)", "MULTIPOINT (80 70, 20 20, 200 170, 140 120)");
            ReportNearestPoint("POINT(110 200)", "LINESTRING (90 80, 160 150, 300 150, 340 150, 340 240)");
            ReportNearestPoint("POINT(0 0)", "POLYGON((10 20, 10 10, 20 10, 20 20, 10 20))");
            ReportNearestPoint("POINT(70 170)", "POLYGON ((110 230, 80 160, 20 160, 20 20, 200 20, 200 160, 140 160, 110 230))");
        }
    }
}
The program output:
POINT (20 20)
POINT (160 150)
POINT (10 10)
POINT (70 160)
I'm not sure if this is possible directly in SQL Server 2008:
http://social.msdn.microsoft.com/Forums/en/sqlspatial/thread/cb094fb8-07ba-4219-8d3d-572874c271b5
The workaround suggested in that thread is:
declare @g geometry = 'LINESTRING(0 0, 10 10)'
declare @h geometry = 'POINT(0 10)'
select @h.STBuffer(@h.STDistance(@g)).STIntersection(@g).ToString()
Otherwise you would have to write a script to read the geometry from your database and use separate spatial libraries.
If you are interested in actually finding the nearest vertex on the line (otherwise called a node), you can turn each line into a set of points sharing the same line id, then query for the closest one and calculate the distance.
If instead you are trying to calculate the distance from a point to the nearest line, use STDistance:
http://msdn.microsoft.com/en-us/library/bb933808.aspx
I guess the problem the other answer addresses is what to put in your WHERE clause, though you could use STDistance to specify a distance above which you don't care, such as:
WHERE pointGeom.STDistance(lineGeom) < "distance you care about"
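If you do want the nearest vertex in C# rather than SQL, here is a sketch with SqlGeometry (the helper name is mine):
// Find the nearest vertex (node) of a line to a given point.
static SqlGeometry NearestVertex(SqlGeometry point, SqlGeometry line)
{
    SqlGeometry best = null;
    double bestDistance = double.MaxValue;
    int n = line.STNumPoints().Value;
    for (int i = 1; i <= n; i++) // STPointN is 1-based
    {
        SqlGeometry vertex = line.STPointN(i);
        double d = point.STDistance(vertex).Value;
        if (d < bestDistance)
        {
            bestDistance = d;
            best = vertex;
        }
    }
    return best;
}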
