I use AForge.NET to find blobs in a bitmap; my bitmap is as follows:
My problem is that AForge.NET detects only one blob when in fact there are two blobs connected by a thin line.
My question: is there an algorithm that can identify that there are two large blobs with a thin connection between them? And how do I implement such an algorithm in C# or VB?
Sample image:
As others have suggested, I would use OpenCV instead of AForge (AForge does not seem to have been updated for a while, plus OpenCV has lots of samples available).
With C#, I suggest the OpenCvSharp NuGet package. It's easy to use because the code really looks like C++ or Python code, like most samples.
So, OpenCV has a blob detector, but it detects blob centers, so in your case it seems you're more after contours than blobs (which is often the case).
Luckily, with OpenCV and your sample image, it just works without doing anything fancy (we don't even have to erode the image first): we can just use findContours, filter out some glitches, and get the convex hull. Here is a sample that demonstrates this:
using (var src = new Mat(filePath))
using (var gray = new Mat())
{
    using (var bw = src.CvtColor(ColorConversionCodes.BGR2GRAY)) // convert to grayscale
    {
        // invert black & white (specific to your white-on-black image)
        Cv2.BitwiseNot(bw, gray);
    }

    // find all contours
    var contours = gray.FindContoursAsArray(RetrievalModes.List, ContourApproximationModes.ApproxSimple);

    using (var dst = src.Clone())
    {
        foreach (var contour in contours)
        {
            // filter small contours by their area
            var area = Cv2.ContourArea(contour);
            if (area < 15 * 15) // a rect of 15x15, or whatever you see fit
                continue;

            // also filter out the whole-image contour (within 1% of the full image area); there may be smarter ways...
            if (Math.Abs((area - (src.Width * src.Height)) / area) < 0.01f)
                continue;

            var hull = Cv2.ConvexHull(contour);
            Cv2.Polylines(dst, new[] { hull }, true, Scalar.Red, 2);
        }

        using (new Window("src image", src))
        using (new Window("dst image", dst))
        {
            Cv2.WaitKey();
        }
    }
}
One quick solution would be to apply the opening operator:
http://www.aforgenet.com/framework/features/morphology_filters.html
If the maximum thickness of the line is known in advance, you could apply the erosion operator multiple times and then apply the dilation operator the same number of times, effectively removing the thin line. This will change the shape of the two blobs, however; see the sketch below.
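For illustration, a minimal AForge sketch of that idea (assuming `image` is an 8bpp binary Bitmap, and that `n` erosions, a value you must pick from the line thickness, are enough to break the connection):

using AForge.Imaging.Filters;

// n erosions remove the thin line (and shrink the blobs)...
var erosion = new Erosion(); // default 3x3 structuring element
for (int i = 0; i < n; i++)
    erosion.ApplyInPlace(image);

// ...and n dilations grow the blobs back to roughly their original size
var dilatation = new Dilatation(); // default 3x3 structuring element
for (int i = 0; i < n; i++)
    dilatation.ApplyInPlace(image);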
If something more sophisticated is required, you might want to follow the approach in this tutorial, which combines the distance transform with the watershed algorithm:
https://docs.opencv.org/3.1.0/d3/db4/tutorial_py_watershed.html
Try the Erosion class; it can clear up the thin line in the center:
http://www.aforgenet.com/framework/docs/html/90a69d73-0e5a-3e27-cc52-5864f542b53e.htm
Then apply the Dilatation class to restore the original size:
http://www.aforgenet.com/framework/docs/html/88f713d4-a469-30d2-dc57-5ceb33210723.htm
and find the blobs again; you will get two of them.
Maybe you want to use OpenCV for your project. It's easier and faster.
NuGet:
https://www.nuget.org/packages/OpenCvSharp3-AnyCPU/3.3.1.20171117
// load the image as grayscale (the enum is spelled GrayScale in OpenCvSharp3)
Mat im = Cv2.ImRead("blob.jpg", ImreadModes.GrayScale);

// detect blobs with default parameters and draw their keypoints in red
SimpleBlobDetector detector = SimpleBlobDetector.Create();
KeyPoint[] points = detector.Detect(im);
Mat result = new Mat();
Cv2.DrawKeypoints(im, points, result, Scalar.Red);
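One caveat: the default parameters tend to favor dark, roundish blobs, so for a white-on-black image you may need to tune them. A hedged sketch (the area threshold is a placeholder):

var blobParams = new SimpleBlobDetector.Params
{
    FilterByColor = true,
    BlobColor = 255,   // look for light blobs on a dark background
    FilterByArea = true,
    MinArea = 225f     // e.g. roughly 15x15 px; adjust to your image
};
SimpleBlobDetector tunedDetector = SimpleBlobDetector.Create(blobParams);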
I want to implement (in a C# program) the system described in this paper: IPSM. It uses tensor fields to design street networks. For my implementation, my priority is to generate my own street network from my own tensor field. I don't want something too advanced at first. The paper says that tensor lines (major and minor eigenvectors) will represent the streets.
Does anyone have any ideas where I should start looking (how can I draw those lines inside a 2D grid)? There are some references inside the paper, such as a tensor field visualization paper, but I keep going in circles from one reference to another.
Regards.
I'm going to assume that it's the drawing part you need help with. C# has a number of drawing capabilities that make it pretty easy to draw stuff like this. GDI+ (the graphics/drawing package contained in System.Drawing) has built-in support for 2D transformations, so we can create a bitmap and then draw on it using arbitrary coordinate systems. You can also leverage the existing Vector class in the System.Windows namespace to make vector math simpler.
First, the namespaces and assemblies you'll need:
using System;
// Needs reference to System.Drawing to use GDI+ for drawing
using System.Drawing;
using System.Drawing.Imaging;
// Needs reference to WindowBase to use Vector class
using Vector = System.Windows.Vector;
The following example just draws a 10x10 grid of vectors. The output looks like this. The code will run just fine inside of a console application (i.e. no user interface). You could also modify the code to generate the bitmap and display in a Windows Forms application via a picture box or some other UI element. The console version, though, is dead simple and easy to play around with:
// Define the size of our viewport using arbitrary world coordinates
var viewportSize = new SizeF(10, 10);

// Create a new bitmap image that is 500 by 500 pixels
using (var bmp = new Bitmap(500, 500, PixelFormat.Format32bppPArgb))
{
    // Create graphics object to draw on the bitmap
    using (var g = Graphics.FromImage(bmp))
    {
        // Set up transformation so that drawing calls automatically convert
        // world coordinates into bitmap coordinates
        g.TranslateTransform(0, bmp.Height * 0.5f - 1);
        g.ScaleTransform(bmp.Width / viewportSize.Width, -bmp.Height / viewportSize.Height);
        g.TranslateTransform(0, -viewportSize.Height * 0.5f);

        // Create pen object for drawing with
        using (var redPen = new Pen(Color.Red, 0.01f)) // Note that line thickness is in world coordinates!
        {
            // Randomization
            var rand = new Random();

            // Draw a 10x10 grid of vectors
            var a = new Vector();
            for (a.X = 0.5; a.X < 10.0; a.X += 1.0)
            {
                for (a.Y = 0.5; a.Y < 10.0; a.Y += 1.0)
                {
                    // Connect the center of this cell to a random point inside the cell
                    var offset = new Vector(rand.NextDouble() - 0.5, rand.NextDouble() - 0.5);
                    var b = a + offset;
                    g.DrawLine(redPen,
                        new PointF((float)a.X, (float)a.Y),  // Vector has no ToPointF(),
                        new PointF((float)b.X, (float)b.Y)); // so convert manually
                }
            }
        }
    }

    // Save the bitmap and display it
    string filename = System.IO.Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments),
        "c#test.png");
    bmp.Save(filename, ImageFormat.Png);
    System.Diagnostics.Process.Start(filename);
}
You are going to need to do quite a lot of work to develop a system like theirs. Your first step will be to draw the flow lines (streamlines) of a vector field. There is a lot of literature on the topic because it is a big area; I would recommend getting a book on the subject rather than trying to work from papers, which always miss the nitty-gritty details.
Once you have a framework that can draw streamlines, you can move on to the other parts of the algorithm. To simplify it, I would look at the section on height maps: if you can generate a height map over the whole domain, you can define one of the vectors as its gradient and draw some streamlines from that vector field. A minimal streamline-tracing sketch follows below.
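To make that concrete, here is a minimal sketch of streamline tracing with fixed-step Euler integration; `field` is a hypothetical delegate returning the vector at a point, and the step size and iteration cap are placeholders:

using System;
using System.Collections.Generic;
using System.Windows; // Point and Vector (reference WindowsBase)

static List<Point> TraceStreamline(Func<Point, Vector> field, Point seed, double step, int maxSteps)
{
    var points = new List<Point> { seed };
    var p = seed;
    for (int i = 0; i < maxSteps; i++)
    {
        var v = field(p);
        if (v.Length < 1e-9)
            break;         // stop at singularities / zero vectors
        v.Normalize();
        p += v * step;     // Euler step; Runge-Kutta (RK4) is more accurate
        points.Add(p);
    }
    return points;
}

You would then draw the returned points as a polyline with Graphics.DrawLines (after converting them to PointF).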
This might be a good way to get a fairly simple working system. Their full algorithm is really quite involved. I would say you would need about a month of work to implement their whole algorithm.
I've been looking for a method for connected component labeling in Emgu (a C# wrapper for OpenCV). I failed to find a direct method for such a basic CV operation. However, I did come across many suggestions for doing it using FindContours and DrawContours, but without code examples. So I had a go at it, and it seems to work okay.
I'm dropping it here for two reasons:
So people searching for it can find a code example.
More importantly, I'm wondering if there are suggestions for optimizing and improving this function. E.g., is the chain approximation method for FindContours efficient/appropriate?
public static Image<Gray, byte> LabelConnectedComponents(this Image<Gray, byte> binary, int startLabel)
{
    Contour<Point> contours = binary.FindContours(
        CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE,
        RETR_TYPE.CV_RETR_CCOMP);

    int count = startLabel;
    for (Contour<Point> cont = contours; cont != null; cont = cont.HNext)
    {
        CvInvoke.cvDrawContours(
            binary,
            cont,
            new MCvScalar(count), // external color: the label value
            new MCvScalar(0),     // hole color
            2,                    // max level
            -1,                   // thickness: -1 fills the contour
            LINE_TYPE.FOUR_CONNECTED,
            new Point(0, 0));
        ++count;
    }
    return binary;
}
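Hypothetical usage, assuming "mask.png" is a binary image with foreground pixels set:

Image<Gray, byte> binaryImage = new Image<Gray, byte>("mask.png");
Image<Gray, byte> labels = binaryImage.LabelConnectedComponents(1); // labels start at 1

Note that the method draws the labels into (and returns) the input image itself.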
I would use the following:
connected component labeling (AForge / Accord.NET?)
(although in many cases you will find it gives almost the same result as the function you wrote, give it a try to verify the results; a minimal sketch follows below)
After this step you will probably find several regions that are close together and belong to the same object.
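A minimal AForge sketch of the labeling step (assuming `binaryBitmap` is an 8bpp binary Bitmap):

using AForge.Imaging.Filters;

// label connected components; each blob gets its own color in the output image
var labeling = new ConnectedComponentsLabeling();
Bitmap labeled = labeling.Apply(binaryBitmap);
int objectCount = labeling.ObjectCount; // number of components found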
Then you can use:
an implementation of hierarchical agglomerative clustering (HAC)
to combine close regions.
P.S.:
is the chain approximation method for FindContours efficient/appropriate
If you use CV_CHAIN_APPROX_NONE, no approximation of the chain code is applied. With it you can get non-smoothed edges (with many small hills and valleys), but if that does not bother you, then this parameter value is fine.
I'm trying to find a digit within an image. To test my code, I took an image of the digit and then used AForge's exhaustive template matching algorithm to search for it in another image. But I think there is a problem: the digit is obviously not rectangular, whereas the image that contains it is. That means there are a lot of pixels participating in the comparison that shouldn't be. Is there any way to make this comparison while ignoring those pixels? If not in AForge, then maybe in Emgu/OpenCV or Octave?
Here's my code:
// convert both the template and the source image to grayscale
Grayscale gray = new GrayscaleRMY();
Bitmap template = (Bitmap)Bitmap.FromFile(@"5template.png");
template = gray.Apply(template);
Bitmap image = (Bitmap)Bitmap.FromFile(filePath);
Bitmap sourceImage = gray.Apply(image);

// search for the template with a 0.7 similarity threshold
ExhaustiveTemplateMatching tm = new ExhaustiveTemplateMatching(0.7f);
TemplateMatch[] matchings = tm.ProcessImage(sourceImage, template);
As mentioned above in the comments, you should preprocess your data to improve matching.
The first thing that comes to mind is morphological opening (erode, then dilate) to reduce the background noise:
Read in your image and invert it so that your characters are white.
Apply opening with the smallest possible structuring element/window (3x3).
You could also try slightly larger structuring elements (5x5).
Invert it back to the original polarity.
See if that helps!
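Since OpenCV came up as an alternative elsewhere, here is a minimal OpenCvSharp sketch of those steps (the file name is a placeholder, and enum casing varies slightly between OpenCvSharp versions):

using OpenCvSharp;

using (var src = Cv2.ImRead("digits.png", ImreadModes.Grayscale))
using (var opened = new Mat())
using (var kernel = Cv2.GetStructuringElement(MorphShapes.Rect, new Size(3, 3)))
{
    Cv2.BitwiseNot(src, src);                                // invert: characters become white
    Cv2.MorphologyEx(src, opened, MorphTypes.Open, kernel);  // opening = erode, then dilate
    Cv2.BitwiseNot(opened, opened);                          // invert back to the original polarity
}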
I am wondering: what exactly is a blob? Is it possible to reduce background noise in an image? Or is it possible to find the largest region in an image? More specifically, if an image contains hand and head segments only, is it possible to separate out just the hand or head region? And if so, is it also possible to keep the boundaries with larger contours while eliminating small patches in the image?
Any suggestions? I have an image that should contain a hand gesture only; I used a skin detection technique to extract it. But the problem is that there are other small noisy regions in the image with the same color as the hand (skin). I want the typical hand gesture only, with the noise removed. Help me?
Using the example from AForge, is there any reason you can't just clear the small bits from your image?
// create an instance of blob counter algorithm
BlobCounterBase bc = new BlobCounter(); // e.g. the standard BlobCounter implementation

// set filtering options (blobs smaller than 5x5 are dropped from the results)
bc.FilterBlobs = true;
bc.MinWidth = 5;
bc.MinHeight = 5;

// process binary image
bc.ProcessImage( image );
Blob[] blobs = bc.GetObjects( image, false );

// process blobs: black out every rectangle below your size thresholds
// (note: Graphics.FromImage does not support indexed pixel formats such as 8bpp)
var rectanglesToClear = from blob in blobs select blob.Rectangle;
using (var gfx = Graphics.FromImage(image))
{
    foreach (var rect in rectanglesToClear)
    {
        if (rect.Height < someMaxHeight && rect.Width < someMaxWidth)
            gfx.FillRectangle(Brushes.Black, rect);
    }
    gfx.Flush();
}
Have a look at morphological opening: this performs an erosion followed by a dilation and essentially removes areas of foreground/background smaller than a "structuring element," the size (and shape) of which you can specify.
I don't know AForge, but in MATLAB the reference is here, and for OpenCV see here.
The application I am working on currently requires functionality for perspective image distortion. Basically, I want to allow users to load an image into the application and adjust its perspective based on four corner points that they can specify.
I had a look at ImageMagick. It has some distort functions with perspective adjustment, but it is very slow, and certain inputs give incorrect outputs.
Have any of you used another library or algorithm? I am coding in C#.
Any pointers would be much appreciated.
Thanks
This seems to be exactly what you (and I) were looking for:
http://www.codeproject.com/KB/graphics/YLScsFreeTransform.aspx
It will take an image and distort it using 4 X/Y coordinates you provide.
Fast, free, simple code. Tested and it works beautifully. Simply download the code from the link, then use FreeTransform.cs like this:
using (System.Drawing.Bitmap sourceImg = new System.Drawing.Bitmap(@"c:\image.jpg"))
{
    YLScsDrawing.Imaging.Filters.FreeTransform filter = new YLScsDrawing.Imaging.Filters.FreeTransform();
    filter.Bitmap = sourceImg;

    // assign FourCorners (the four X/Y coords) of the new perspective shape
    filter.FourCorners = new System.Drawing.PointF[]
    {
        new System.Drawing.PointF(0, 0),
        new System.Drawing.PointF(300, 50),
        new System.Drawing.PointF(300, 411),
        new System.Drawing.PointF(0, 461)
    };
    filter.IsBilinearInterpolation = true; // optional, for higher quality

    using (System.Drawing.Bitmap perspectiveImg = filter.Bitmap)
    {
        // perspectiveImg contains your completed image; save it or do whatever
    }
}
Paint.NET can do this, and there are also custom implementations of the effect. You could ask for the source code or use Reflector to read it and get an idea of how to code it.
If it is a perspective transform, you should be able to specify a 3x3 homogeneous transformation matrix (a homography) that maps the four corners.
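For reference, a homography maps homogeneous coordinates as

$$
\begin{pmatrix} x' \\ y' \\ w \end{pmatrix} =
\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix},
\qquad
(u, v) = \left( \frac{x'}{w}, \frac{y'}{w} \right),
$$

so there are eight unknowns (a through h), and the four corner correspondences give exactly eight equations (two per corner) to solve for them.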
Calculate that matrix; then, for each pixel of the destination image, apply the inverse matrix to find the corresponding position in the source image. Notice that this "mapped" position will very likely lie between two or even four pixels. In that case, use your favorite interpolation algorithm (e.g. bilinear, bicubic) to get the interpolated color.
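For the interpolation step, a minimal bilinear sketch (the pixel accessor is a hypothetical stand-in for your bitmap access, and border clamping is omitted):

using System;

// bilinearly interpolate a grayscale value at the fractional source position (sx, sy)
static byte Bilinear(Func<int, int, byte> getPixel, double sx, double sy)
{
    int x0 = (int)Math.Floor(sx), y0 = (int)Math.Floor(sy);
    double fx = sx - x0, fy = sy - y0;
    double top    = getPixel(x0, y0)     * (1 - fx) + getPixel(x0 + 1, y0)     * fx;
    double bottom = getPixel(x0, y0 + 1) * (1 - fx) + getPixel(x0 + 1, y0 + 1) * fx;
    return (byte)Math.Round(top * (1 - fy) + bottom * fy);
}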
This really is the standard way to do it, and it is hard to do faster on the CPU. If this feature is crucial and you absolutely need it to be fast, you will need to offload the task to the GPU. For example, you can use the DirectX library to apply a perspective transformation to a texture. That can make it extremely fast, even when there is no GPU, because the DirectX library uses SIMD instructions to accelerate matrix calculations and color interpolation.
I had the same problem. Here is demo code with sources, ported from GIMP.
YLScsFreeTransform doesn't work as expected. A much better solution is ImageMagick.
Here is how you use it in C#:
using (MagickImage image = new MagickImage("test.jpg"))
{
    // arguments are pairs: each source corner (x, y) followed by its destination (newX, newY)
    image.Distort(DistortMethod.Perspective, new double[] { x0, y0, newX0, newY0, x1, y1, newX1, newY1, x2, y2, newX2, newY2, x3, y3, newX3, newY3 });
    control.Image = image.ToBitmap();
}