I want to implement (in a C# program) the system from the IPSM paper. It uses tensor fields to design street networks. For my implementation, my priority is to generate my own street network from my own tensor field; I don't want anything too advanced at first. The paper says that tensor lines (traced along the major and minor eigenvectors) will represent the streets.
Does anyone have any ideas on where I should start looking (i.e. how can I draw those lines inside a 2D grid)? There are some references inside the paper, such as a tensor-field visualization paper, but I keep going in circles from one reference to the next.
Regards.
I'm going to assume that it's the drawing part you need help with. C# has a number of drawing capabilities that make it pretty easy to draw stuff like this. GDI+ (the graphics/drawing package contained in System.Drawing) has built-in support for 2D transformations, so we can create a bitmap and then draw on it using arbitrary coordinate systems. You can also leverage the existing Vector class in the System.Windows namespace to make vector math simpler.
First, the namespaces and assemblies you'll need:
using System;
// Needs reference to System.Drawing to use GDI+ for drawing
using System.Drawing;
using System.Drawing.Imaging;
// Needs reference to WindowBase to use Vector class
using Vector = System.Windows.Vector;
The following example just draws a 10x10 grid of random vectors. The code will run just fine inside a console application (i.e. no user interface). You could also modify the code to generate the bitmap and display it in a Windows Forms application via a PictureBox or some other UI element. The console version, though, is dead simple and easy to play around with:
// Define the size of our viewport using arbitrary world coordinates
var viewportSize = new SizeF(10, 10);

// Create a new bitmap image that is 500 by 500 pixels
using (var bmp = new Bitmap(500, 500, PixelFormat.Format32bppPArgb))
{
    // Create graphics object to draw on the bitmap
    using (var g = Graphics.FromImage(bmp))
    {
        // Set up transformation so that drawing calls automatically convert
        // world coordinates into bitmap coordinates
        g.TranslateTransform(0, bmp.Height * 0.5f - 1);
        g.ScaleTransform(bmp.Width / viewportSize.Width, -bmp.Height / viewportSize.Height);
        g.TranslateTransform(0, -viewportSize.Height * 0.5f);

        // Create pen object for drawing with
        using (var redPen = new Pen(Color.Red, 0.01f)) // Note that line thickness is in world coordinates!
        {
            // Randomization
            var rand = new Random();

            // Draw a 10x10 grid of vectors
            var a = new Vector();
            for (a.X = 0.5; a.X < 10.0; a.X += 1.0)
            {
                for (a.Y = 0.5; a.Y < 10.0; a.Y += 1.0)
                {
                    // Connect the center of this cell to a random point inside the cell
                    var offset = new Vector(rand.NextDouble() - 0.5, rand.NextDouble() - 0.5);
                    var b = a + offset;

                    // Vector has no built-in ToPointF(), so convert explicitly
                    g.DrawLine(redPen,
                        new PointF((float)a.X, (float)a.Y),
                        new PointF((float)b.X, (float)b.Y));
                }
            }
        }
    }

    // Save the bitmap and display it
    string filename = System.IO.Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments),
        "c#test.png");
    bmp.Save(filename, ImageFormat.Png);
    System.Diagnostics.Process.Start(filename);
}
You are going to need to do quite a lot of work to develop a system like theirs. Your first step will be to draw the flow lines (streamlines) of a vector field. There is a lot of literature on the topic because it is a big area; I would recommend getting a book on the subject rather than trying to work from papers, which are always missing the nitty-gritty details.
Once you have a framework that can trace streamlines, you can move on to the other parts of the algorithm. To simplify things, I would look at the section on height maps: if you can generate a height map over the whole domain, you can define one of the eigenvector fields as its gradient and trace some streamlines from that vector field.
This might be a good way to get a fairly simple working system. Their full algorithm is really quite involved; I would estimate about a month of work to implement the whole thing.
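To make the first step concrete, here is a minimal streamline tracer using forward Euler integration. The circular field below is a hypothetical stand-in for the major-eigenvector field of your tensor field; swap in your own sampler. The resulting point list can be fed straight into `Graphics.DrawLines`:

```csharp
using System;
using System.Collections.Generic;

// A minimal streamline tracer (a sketch, not the paper's algorithm).
static class StreamlineDemo
{
    // Hypothetical example field: unit vectors circling the origin.
    // Replace this with a sampler for your tensor field's eigenvector.
    static (double X, double Y) SampleField(double x, double y)
    {
        double len = Math.Sqrt(x * x + y * y);
        if (len < 1e-9) return (1, 0);      // avoid the singular point
        return (-y / len, x / len);         // perpendicular to the radius
    }

    // Forward-Euler integration: repeatedly step along the field direction.
    public static List<(double X, double Y)> Trace(
        double startX, double startY, double stepSize, int maxSteps)
    {
        var points = new List<(double, double)> { (startX, startY) };
        double x = startX, y = startY;
        for (int i = 0; i < maxSteps; i++)
        {
            var (dx, dy) = SampleField(x, y);
            x += dx * stepSize;
            y += dy * stepSize;
            points.Add((x, y));
        }
        return points;
    }
}
```

A higher-order integrator (e.g. RK4) traces curved lines more faithfully at larger step sizes; Euler is just the simplest thing that works for experimenting.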
Related
I use AForge.NET to find blobs in a bitmap; my bitmap is as follows:
My problem is that AForge.NET detects only one blob when in fact there are two blobs connected by a thin line.
My question: is there an algorithm that identifies that there are two large blobs with a thin connection between them? And how do I implement this algorithm in C# or VB?
As others suggested, I would use OpenCv instead of AForge (it seems AForge has not been updated for a while, plus OpenCv has lots of samples available).
With C#, I suggest the OpenCvSharp NuGet package. It's easy to use because the code really looks like C++ or Python code, as in most samples.
So, OpenCv has a blob detector, but it detects blob centers, so in your case it seems you're more after contours than blobs (which is often the case).
Luckily, with OpenCv and your sample image, it just works without doing anything fancy (we don't even have to erode the image first): we can just use findContours, filter some glitches, and get the convex hull. Here is a sample that demonstrates that:
using (var src = new Mat(filePath))
using (var gray = new Mat())
{
    using (var bw = src.CvtColor(ColorConversionCodes.BGR2GRAY)) // convert to grayscale
    {
        // invert b&w (specific to your white-on-black image)
        Cv2.BitwiseNot(bw, gray);
    }

    // find all contours
    var contours = gray.FindContoursAsArray(RetrievalModes.List, ContourApproximationModes.ApproxSimple);
    using (var dst = src.Clone())
    {
        foreach (var contour in contours)
        {
            // filter small contours by their area
            var area = Cv2.ContourArea(contour);
            if (area < 15 * 15) // a rect of 15x15, or whatever you see fit
                continue;

            // also filter out the whole-image contour (within 1% of the full area); there may be smarter ways...
            if (Math.Abs((area - (src.Width * src.Height)) / area) < 0.01f)
                continue;

            var hull = Cv2.ConvexHull(contour);
            Cv2.Polylines(dst, new[] { hull }, true, Scalar.Red, 2);
        }

        using (new Window("src image", src))
        using (new Window("dst image", dst))
        {
            Cv2.WaitKey();
        }
    }
}
One quick solution would be to apply the opening operator
http://www.aforgenet.com/framework/features/morphology_filters.html
If the maximum thickness of the line is known in advance, one could apply the erosion operator multiple times and then apply the dilation operator the same number of times, effectively removing the thin line. This will change the shape of the 2 blobs, however.
If something more sophisticated is required, you might want to follow the approach in this tutorial, which combines the distance transform with the watershed algorithm:
https://docs.opencv.org/3.1.0/d3/db4/tutorial_py_watershed.html
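In OpenCvSharp terms (matching the library used in the accepted answer), the erode-then-dilate sequence is a single morphological "open" operation. The 15x15 kernel size below is a guess, not from the thread; it just needs to be slightly larger than the thickness of the connecting line:

```csharp
using OpenCvSharp;

using (var src = new Mat("blobs.png", ImreadModes.Grayscale))
using (var opened = new Mat())
// Kernel must be wider than the connecting line; 15x15 is a placeholder.
using (var kernel = Cv2.GetStructuringElement(MorphShapes.Rect, new Size(15, 15)))
{
    // Opening = erosion followed by dilation: removes the thin bridge
    // while (approximately) restoring the blobs' original outline.
    Cv2.MorphologyEx(src, opened, MorphTypes.Open, kernel);
    Cv2.ImWrite("blobs-opened.png", opened);
}
```

After this, a plain blob counter should see two separate blobs, at the cost of slightly rounded corners on each blob.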
Try the Erosion class; it can clear up the thin line in the center:
http://www.aforgenet.com/framework/docs/html/90a69d73-0e5a-3e27-cc52-5864f542b53e.htm
Then call the Dilatation class to restore the original size:
http://www.aforgenet.com/framework/docs/html/88f713d4-a469-30d2-dc57-5ceb33210723.htm
and find blobs again; you will get the two blobs.
Maybe you want to use OpenCV for your project. It's easier and faster.
Nuget:
https://www.nuget.org/packages/OpenCvSharp3-AnyCPU/3.3.1.20171117
Mat im = Cv2.ImRead("blob.jpg", ImreadModes.GrayScale);
SimpleBlobDetector detector = SimpleBlobDetector.Create();
KeyPoint[] points = detector.Detect(im);
Mat result = new Mat();
Cv2.DrawKeypoints(im, points, result, Scalar.Red);
Does anyone have any idea how the Graphics object uses resources?
I am drawing several thousand GraphicsPath objects with latitude/longitude coordinates on a Panel. Initially those GraphicsPaths have to be zoomed (transformed; four matrix transformations, actually). The user can then move the map around and zoom, with each action triggering a repaint of the graphics paths.
The problem: the whole thing is still responsive when the zoom level is around 2,000-10,000, but when it gets to hundreds of thousands (which is street-level zoom) painting takes too long and makes the whole application unresponsive. I checked the free memory: still plenty. CPU usage is OK too.
How come drawing the same thousands of GraphicsPaths, each with the same four matrix transformations, becomes extremely slow when the zoom factor is increased? Is the problem in System.Drawing itself when handling GraphicsPath coordinates with large values? Have you ever faced the same problem?
Sorry, good people, for not including the code; here is the "slow" chunk, basically the iteration part of the _paint method. It runs over 30,000 GraphicsPaths, most of them polylines extracted from ESRI .shp files. The x coordinates are positive and the y coordinates are negative and flipped upside down, hence the matrix transforms required to paint on the panel. At a low value of the zoom variable zI it is much faster than at a high value; a high zI means much of each graphics path lies outside the painted area. I tried to reduce the work by checking IsVisible or by intersecting bounding rectangles, but that is still not fast enough. Any ideas?
foreach (GraphicsPath vectorDraw in currentShape.vectorPath)
{
    GraphicsPath paintPath = (GraphicsPath)vectorDraw.Clone();

    OperationMatrix = new Matrix();
    OperationMatrix.Translate(-DisplayPort.X, -DisplayPort.Y);
    paintPath.Transform(OperationMatrix);

    OperationMatrix = new Matrix();
    OperationMatrix.Scale(zI, zI);
    paintPath.Transform(OperationMatrix);

    OperationMatrix = new Matrix(1, 0, 0, -1, 0, DisplaySize.Height);
    paintPath.Transform(OperationMatrix);

    OperationMatrix = new Matrix();
    OperationMatrix.Translate(ClientGap.Width, -ClientGap.Height);
    paintPath.Transform(OperationMatrix);

    //if (WiredPort.IsVisible(paintPath.GetBounds())) // Futile attempt
    //{
    Pen LandBoundariesPen = new Pen(Color.FromArgb(255, 225, 219, 127));
    GraphContext.DrawPath(LandBoundariesPen, paintPath); // this is the slowest part; when commented out it goes faster
    pathCountX++;
    //}
}
Help .... :)
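For reference, the four successive transforms in the loop above can be collapsed into a single Matrix that is built once per paint and set on the Graphics object itself, so no path has to be cloned or transformed at all. This is only a sketch; the parameter names mirror the question's variables (zI, DisplayPort, DisplaySize, ClientGap), which are assumptions here:

```csharp
using System.Collections.Generic;
using System.Drawing;
using System.Drawing.Drawing2D;

static class MapPainter
{
    // Build the combined world transform once and draw every path through it.
    public static void PaintShapes(Graphics graphContext, IEnumerable<GraphicsPath> paths,
                                   PointF displayPort, float zI, Size displaySize, Size clientGap)
    {
        using (var world = new Matrix())
        using (var flip = new Matrix(1, 0, 0, -1, 0, displaySize.Height))
        // Pen width is scaled by the transform, so 1/zI keeps a ~1-pixel line.
        using (var landPen = new Pen(Color.FromArgb(255, 225, 219, 127), 1f / zI))
        {
            // Append reproduces the question's order: translate, scale, flip, translate.
            world.Translate(-displayPort.X, -displayPort.Y, MatrixOrder.Append);
            world.Scale(zI, zI, MatrixOrder.Append);
            world.Multiply(flip, MatrixOrder.Append);
            world.Translate(clientGap.Width, -clientGap.Height, MatrixOrder.Append);

            graphContext.Transform = world;
            foreach (var path in paths)
                graphContext.DrawPath(landPen, path); // no Clone, no per-path Transform
            graphContext.ResetTransform();
        }
    }
}
```

This removes the per-path Clone/Transform allocations and the leaked Pen per iteration; the remaining cost is the actual rasterization, which culling (drawing only paths whose bounds intersect the viewport) can reduce further.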
For high-performance rendering, DirectX is preferred over WPF. You can also consider using OpenGL in C#.
Edit: For a tutorial on how to use OpenGL in C# via the Tao framework, visit the link below:
http://xinyustudio.wordpress.com/2008/12/01/using-opengl-in-c-taoframework/
I am wondering: what exactly is a blob? Is it possible to reduce background noise in an image? Or is it possible to find the largest region in an image; more specifically, if an image contains hand and head segments only, is it possible to separate the hand or head regions? If so, is it also possible to select the boundary with the larger contour while eliminating small patches in the image?
Any suggestions? I have an image containing a hand gesture only; I used a skin-detection technique to get it. But the problem is that I have other small noises in the image that have the same color as the hand (skin). I want the typical hand gesture only, with the noise removed. Help me?
Using the example from AForge, is there any reason you can't just clear the small bits from your image?
// create an instance of blob counter algorithm
BlobCounterBase bc = new ...

// set filtering options
bc.FilterBlobs = true;
bc.MinWidth = 5;
bc.MinHeight = 5;

// process binary image
bc.ProcessImage( image );
Blob[] blobs = bc.GetObjects( image, false );

// process blobs
var rectanglesToClear = from blob in blobs select blob.Rectangle;
using (var gfx = Graphics.FromImage(image))
{
    foreach (var rect in rectanglesToClear)
    {
        if (rect.Height < someMaxHeight && rect.Width < someMaxWidth)
            gfx.FillRectangle(Brushes.Black, rect);
    }
    gfx.Flush();
}
Have a look at morphological opening: this performs an erosion followed by a dilation and essentially removes areas of foreground/background smaller than a "structuring element," the size (and shape) of which you can specify.
I don't know AForge, but in Matlab the reference is here and in OpenCV see here.
The application I am working on currently requires functionality for perspective image distortion. Basically, what I want to do is allow users to load an image into the application and adjust its perspective based on four corner points that they can specify.
I had a look at ImageMagick. It has some distort functions with perspective adjustment, but it is very slow, and certain inputs give incorrect outputs.
Have any of you used another library or algorithm? I am coding in C#.
Any pointers would be much appreciated.
Thanks
This seems to be exactly what you (and I) were looking for:
http://www.codeproject.com/KB/graphics/YLScsFreeTransform.aspx
It will take an image and distort it using 4 X/Y coordinates you provide.
Fast, free, simple code. Tested and it works beautifully. Simply download the code from the link, then use FreeTransform.cs like this:
using (System.Drawing.Bitmap sourceImg = new System.Drawing.Bitmap(@"c:\image.jpg"))
{
    YLScsDrawing.Imaging.Filters.FreeTransform filter = new YLScsDrawing.Imaging.Filters.FreeTransform();
    filter.Bitmap = sourceImg;

    // assign FourCorners (the four X/Y coords) of the new perspective shape
    filter.FourCorners = new System.Drawing.PointF[] {
        new System.Drawing.PointF(0, 0),
        new System.Drawing.PointF(300, 50),
        new System.Drawing.PointF(300, 411),
        new System.Drawing.PointF(0, 461) };
    filter.IsBilinearInterpolation = true; // optional, for higher quality

    using (System.Drawing.Bitmap perspectiveImg = filter.Bitmap)
    {
        // perspectiveImg contains your completed image; save it or do whatever
    }
}
Paint.NET can do this, and there are also custom implementations of the effect. You could ask for the source code or use Reflector to read it and get an idea of how to code it.
If it is a perspective transform, you should be able to specify a 3x3 transformation matrix (a homography) that matches the four corners.
Calculate that matrix, then, for each pixel of the resulting image, apply the matrix to find the corresponding "mapped" source location. Notice that this mapped location will very likely lie between two or even four pixels. In this case, use your favorite interpolation algorithm (e.g. bilinear, bicubic) to get the interpolated color.
This really is the way it has to be done and cannot be made much faster on the CPU. If this feature is crucial and you absolutely need it to be fast, then you'll need to offload the task to a GPU. For example, you can call upon the DirectX library to apply a perspective transformation to a texture. That can make it extremely fast, even when there is no GPU, because the DirectX library uses SIMD instructions to accelerate matrix calculations and color interpolation.
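To make the mapping step concrete, here is a sketch. `Map` applies an assumed, precomputed 3x3 homography (computing it from the four corner correspondences means solving an 8-unknown linear system, omitted here), and `Bilinear` does the color lookup between pixels:

```csharp
using System;

static class PerspectiveSketch
{
    // Apply a 3x3 homography h (row-major, 9 values) to a point.
    // Returns the mapped (u, v) after the perspective divide.
    public static (double U, double V) Map(double[] h, double x, double y)
    {
        double w = h[6] * x + h[7] * y + h[8];
        double u = (h[0] * x + h[1] * y + h[2]) / w;
        double v = (h[3] * x + h[4] * y + h[5]) / w;
        return (u, v);
    }

    // Bilinear interpolation of a grayscale image at a fractional position.
    public static double Bilinear(double[,] img, double u, double v)
    {
        int x0 = (int)Math.Floor(u), y0 = (int)Math.Floor(v);
        double fx = u - x0, fy = v - y0;
        double c00 = img[y0, x0],     c10 = img[y0, x0 + 1];
        double c01 = img[y0 + 1, x0], c11 = img[y0 + 1, x0 + 1];
        return c00 * (1 - fx) * (1 - fy) + c10 * fx * (1 - fy)
             + c01 * (1 - fx) * fy       + c11 * fx * fy;
    }
}
```

In practice you iterate over the destination pixels and map each one back through the inverse homography, so every output pixel gets exactly one interpolated color and no holes appear.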
Had the same problem. Here is demo code with sources, ported from GIMP.
YLScsFreeTransform doesn't work as expected. A far better solution is ImageMagick.
Here is how you use it in C#:
using (MagickImage image = new MagickImage("test.jpg"))
{
    image.Distort(DistortMethod.Perspective, new double[] { x0,y0, newX0,newY0, x1,y1,newX1,newY1, x2,y2,newX2,newY2, x3,y3,newX3,newY3 });
    control.Image = image.ToBitmap();
}
I have a dynamic List of Points; new Points can be added at any time. I want to draw lines connecting them, using a different color based on the index of the points. Here is the code:
private List<Point> _points;

private static Pen pen1 = new Pen(Color.Red, 10);
private static Pen pen2 = new Pen(Color.Yellow, 10);
private static Pen pen3 = new Pen(Color.Blue, 10);
private static Pen pen4 = new Pen(Color.Green, 10);

private void Init()
{
    // use a fixed 80 points for simplicity
    _points = new List<Point>(80);
    for (int i = 0; i < 80; i++)
    {
        _points.Add(new Point(30 + i * 10, 30));
    }
}

private void DrawLinesNormal(PaintEventArgs e)
{
    for (int i = 0; i < _points.Count - 1; i++)
    {
        if (i < 20)
            e.Graphics.DrawLine(pen1, _points[i], _points[i + 1]);
        else if (i < 40)
            e.Graphics.DrawLine(pen2, _points[i], _points[i + 1]);
        else if (i < 60)
            e.Graphics.DrawLine(pen3, _points[i], _points[i + 1]);
        else
            e.Graphics.DrawLine(pen4, _points[i], _points[i + 1]);
    }
}
I find this method is not fast enough when new points are coming in at high speed. Is there any way to make it faster? I did some research, and someone said using GraphicsPath could be faster, but how?
[UPDATE] I have collected some possible optimizations:
Using GraphicsPath (the original question)
Changing the Graphics quality (SmoothingMode/PixelOffsetMode, etc.), and calling SetClip to restrict rendering to the necessary region.
You won't be able to squeeze much more speed out of that code without losing quality or changing to a faster renderer (GDI, OpenGL, DirectX). But GDI will often be quite a bit faster (maybe 2x), and DirectX/OpenGL can be much faster (maybe 10x), depending on what you're drawing.
The idea of using a Path is that you batch many (in your example, 20) lines into a single method call, rather than calling DrawLine 20 times. This will only benefit you if you can arrange the incoming data into the correct list-of-points format for the drawing routine. Otherwise, you will have to copy the points into the correct data structure and this will waste a lot of the time that you are gaining by batching into a path. In the case of DrawPath, you may have to create a GraphicsPath from an array of points, which may result in no time saved. But if you have to draw the same path more than once, you can cache it, and you may then see a net benefit.
If new points are added to the list, but old ones are not removed (i.e. you are always just adding new lines to the display) then you would be able to use an offscreen bitmap to store the lines rendered so far. That way each time a point is added, you draw one line, rather than drawing all 80 lines every time.
It all depends on exactly what you're trying to do.
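The offscreen-bitmap idea above might look like the sketch below; the class and field names are illustrative, not from the thread:

```csharp
using System;
using System.Drawing;

// Sketch of the incremental approach: each new segment is drawn into a
// cached bitmap exactly once; the paint handler only blits the cache.
class LineCache : IDisposable
{
    private readonly Bitmap _cache = new Bitmap(800, 600);
    private Point? _last;

    public void AddPoint(Point p, Pen pen)
    {
        if (_last.HasValue)
            using (var g = Graphics.FromImage(_cache))
                g.DrawLine(pen, _last.Value, p);   // draw only the new segment
        _last = p;
        // call Invalidate() on the control here to schedule a repaint
    }

    public void Paint(Graphics target)
    {
        target.DrawImageUnscaled(_cache, 0, 0);    // single blit per repaint
    }

    public void Dispose() => _cache.Dispose();
}
```

The cost per new point is then one DrawLine plus one blit, independent of how many lines have accumulated.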
This doesn't really improve performance, but I would also put the pens into a list and write all those lines this way:
int ratio = _points.Count / _pens.Count;
for (int i = 0; i < _points.Count - 1; i++)
{
    e.Graphics.DrawLine(_pens[i / ratio], _points[i], _points[i + 1]);
}
This is about as fast as you're going to get with System.Drawing. You might see a bit of gain using Graphics.DrawLines(), but you'd need to format your data differently to get the advantage of drawing a bunch of lines at once with the same pen. I seriously doubt GraphicsPath will be faster.
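To illustrate the DrawLines() point with the question's own fields (`_points`, the pens, and the 20-segment color bands are from the question; this is only a sketch):

```csharp
// One DrawLines call per color band instead of 20 DrawLine calls.
// _points, pen1, pen2 and e.Graphics come from the question's code.
Point[] band1 = _points.GetRange(0, 21).ToArray();   // 21 points = 20 segments
e.Graphics.DrawLines(pen1, band1);

Point[] band2 = _points.GetRange(20, 21).ToArray();  // reuse the boundary point
e.Graphics.DrawLines(pen2, band2);
// ... likewise for the third and fourth bands
```

The GetRange/ToArray copies are the "formatting your data differently" cost mentioned above; if the points already live in per-band arrays, the batching is free.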
One sure way to improve speed is to reduce the quality of the output. Set Graphics.InterpolationMode to InterpolationMode.Low, Graphics.CompositingQuality to CompositingQuality.HighSpeed, Graphics.SmoothingMode to SmoothingMode.HighSpeed, Graphics.PixelOffsetMode to PixelOffsetMode.HighSpeed and Graphics.CompositingMode to CompositingMode.SourceCopy.
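Collected into code, those settings look like this (a sketch; `g` is whatever Graphics you are drawing with):

```csharp
using System.Drawing;
using System.Drawing.Drawing2D;

static class FastGraphics
{
    // Trade rendering quality for speed before drawing lines.
    public static void SetFastRendering(Graphics g)
    {
        g.InterpolationMode = InterpolationMode.Low;
        g.CompositingQuality = CompositingQuality.HighSpeed;
        g.SmoothingMode = SmoothingMode.HighSpeed;       // no anti-aliasing
        g.PixelOffsetMode = PixelOffsetMode.HighSpeed;
        // Note: SourceCopy is fine for lines/fills but GDI+ rejects it
        // for text rendering (DrawString).
        g.CompositingMode = CompositingMode.SourceCopy;
    }
}
```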
I remember a speed test once where someone compared Graphics to P/Invoke into GDI routines, and was quite surprised by the much faster P/Invoke speeds. You might check that out. I'll see if I can find that comparison... Apparently this was for the Compact Framework, so it likely doesn't hold for a PC.
The other way to go is to use Direct2D, which can be faster yet than GDI, if you have the right hardware.
Too late, but possibly somebody still needs a solution.
I've created a small library, GLGDI+, with similar (but not fully equal) GDI+ syntax, which runs on top of OpenTK: http://code.google.com/p/glgdiplus/
I'm not sure about its stability; it has some issues with DrawString (a problem with TextPrint from OpenTK). But if you need a performance boost for your utility (like the level editor in my case), it can be a solution.
You might want to look into the Brush object. It's true that you won't get near-real-time performance out of a GDI+ program, but you can easily maintain a decent frame rate as long as the geometry and the number of objects stay within reasonable bounds. As for line drawing, I don't see why not.
But if you reach the point where you are doing what you think is optimal, and all you are doing is drawing lines, you should consider a different graphics stack; and if you like .NET but have issues with unmanaged APIs like OpenGL and DirectX, go with WPF or Silverlight; they are quite powerful.
Anyway, you could try setting up a System.Drawing.Drawing2D.GraphicsPath and then using a System.Drawing.Drawing2D.PathGradientBrush to apply the colors. That's a single buffered draw call, and if you can't get enough performance out of that, you'll have to go with something entirely other than GDI+.
Not GDI(+) at all, but a completely different way to tackle this could be to work with a block of memory, draw your lines into it, and convert it to a Bitmap object to instantly paint where you need to show your lines.
Of course, this hinges in the extreme on fast ways to:
draw lines of a given color into the memory representation of your choice, and
convert that representation into the Bitmap to show.
Not in the .NET Framework itself, I think, but perhaps in a third-party library? Isn't there a bitmap writer of sorts in Silverlight for stuff like this? (I'm not that much into Silverlight myself yet...)
At least it might be an out-of-the-box way to approach this. Hope it helps.
I think you have to dispose of your Pen objects after drawing (but not e.Graphics; that object is owned by the paint event).
One more thing: it is better to put your line-drawing code inside OnPaint(). Just override OnPaint(); it supports better and faster drawing.