EmguCV C# : FindContours() to detect different shapes - c#

I have this image :
What I am trying to do is detect its contours. So, after looking at the documentation and some code on the web, I came up with this:
Image<Gray, byte> image = receivedImage.Convert<Gray, byte>().ThresholdBinary(new Gray(80), new Gray(255));
Emgu.CV.Util.VectorOfVectorOfPoint contours = new Emgu.CV.Util.VectorOfVectorOfPoint();
Mat hier = new Mat();
CvInvoke.FindContours(image, contours, hier, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
CvInvoke.DrawContours(receivedImage, contours, 0, new MCvScalar(255, 0, 0), 2);
Then it detects this contour in blue :
Now I would like to detect both rectangles as different contours. So the result would be this:
(made with Paint) So now I would like to detect the two rectangles separately (the blue and red rectangles would be two different contours). But I have no idea how to do that!
Thanks in advance for your help ! ;)

The problem comes from the ThresholdBinary step. As I assume you understand, this method returns a binary image, whereby all pixels above the threshold parameter are pulled up to the maxValue parameter, and all those below are pulled down to 0. The produced image therefore consists of only two values (binary): 0 or maxValue. If we follow your example with some assumed gray values:
After Image<Gray, byte> image = receivedImage.Convert<Gray, byte>().ThresholdBinary(new Gray(80), new Gray(255));, you will produce:
This is in fact the image that you are passing to CvInvoke.FindContours() and is why you subsequently find only the outermost contour.
What you need, if indeed you want to continue with FindContours, is an algorithm that will "bin", or "bandpass", your image to first produce the following segments, each of which is then converted to binary and contour-detected independently.
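For illustration, here is a minimal sketch of that banding idea with Emgu CV. The gray ranges below (80–120 and 121–255) are placeholders; you would pick them from the actual pixel values in your image:

```csharp
using Emgu.CV;
using Emgu.CV.Structure;

// Convert to grayscale once.
Image<Gray, byte> gray = receivedImage.Convert<Gray, byte>();

// Isolate each intensity band into its own binary image ("binning").
// The band limits are assumptions; adjust them to your image.
Image<Gray, byte> outerBand = gray.InRange(new Gray(80), new Gray(120));   // outer rectangle
Image<Gray, byte> innerBand = gray.InRange(new Gray(121), new Gray(255));  // inner rectangle

foreach (var band in new[] { outerBand, innerBand })
{
    using (var contours = new Emgu.CV.Util.VectorOfVectorOfPoint())
    using (Mat hier = new Mat())
    {
        CvInvoke.FindContours(band, contours, hier,
            Emgu.CV.CvEnum.RetrType.External,
            Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
        // Index -1 draws every contour found in this band.
        CvInvoke.DrawContours(receivedImage, contours, -1, new MCvScalar(255, 0, 0), 2);
    }
}
```

Each band yields its own external contour, so the two rectangles come out as separate contours.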
I feel that your current example is probably an oversimplification of the problem, which makes it hard to offer a full solution here. However, please do ask another question with more realistic data, and I will be happy to provide some suggestions.
Alternatively, look towards more sophisticated edge detection methods such as Canny or Sobel. This video may be a good starting point: Edge Detection
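As a hedged sketch of the Canny route (the 100/50 thresholds are guesses to tune): because the edge image outlines both rectangles, FindContours then returns them as separate contours.

```csharp
using Emgu.CV;
using Emgu.CV.Structure;

Image<Gray, byte> gray = receivedImage.Convert<Gray, byte>();

// Edge image: both rectangle outlines survive, regardless of fill level.
// Thresholds are assumptions; tune them for your data.
Image<Gray, byte> edges = gray.Canny(100, 50);

using (var contours = new Emgu.CV.Util.VectorOfVectorOfPoint())
using (Mat hier = new Mat())
{
    // RetrType.List keeps every contour rather than only the outermost one.
    CvInvoke.FindContours(edges, contours, hier,
        Emgu.CV.CvEnum.RetrType.List,
        Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
    CvInvoke.DrawContours(receivedImage, contours, -1, new MCvScalar(0, 0, 255), 2);
}
```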

Related

(EMGU) How do I split and merge an Image?

I am working in C# on Visual Studio with Emgu.
I am doing several image manipulations on a large image. I had the idea of splitting the image in half, doing the manipulations in parallel, then merging the images.
In pursuit of this goal, I have found a number of questions regarding the acquisition of rectangular parts of images for processing as well as splitting an image into channels (RGB, HSV, etc). I have not found a question that addresses the task of taking an image, and making it into two images. I have also not found a question that addresses taking two images and tacking them together.
The following code is what I would like to do, where split and merge are imaginary methods to accomplish it.
Image<Bgr,Byte> ogImage = new Image<Bgr, byte>(request.image);
Image<Bgr,Byte> topHalf = new Image<Bgr, byte>();
Image<Bgr,Byte> bottomHalf = new Image<Bgr, byte>();
ogImage.splitHorizontally(topHalf, bottomHalf);
//operations
ogImage = topHalf.merge(bottomHalf);
This is the type of question I hate asking, because it is simple and you would think it has a simple, easily available solution, but I have not found it, or I have found it and not understood it.
There are a number of ways to solve this but here is what I did. I took the easiest way out ;-)
Mat lena = new Mat(@"D:\OpenCV\opencv-3.2.0\samples\data\Lena.jpg",
                   ImreadModes.Unchanged);
CvInvoke.Imshow("Lena", lena);
System.Drawing.Rectangle topRect = new Rectangle(0, 0, lena.Width, lena.Height / 2);
System.Drawing.Rectangle bottomRect = new Rectangle(0, lena.Height / 2, lena.Width, lena.Height / 2);
Mat lenaTop = new Mat(lena, topRect);
CvInvoke.Imshow("Lena Top", lenaTop);
Mat lenaBottom = new Mat(lena, bottomRect);
CvInvoke.Imshow("Lena Bottom", lenaBottom);
Mat newLena = new Mat();
CvInvoke.VConcat(lenaBottom, lenaTop, newLena);
CvInvoke.Imshow("New Lena", newLena);
CvInvoke.WaitKey(0);
Original Lena
Lena Top Half
Lena Bottom Half
The New Lena Rearranged
Your goal isn't splitting an image. Your goal is to parallelize some operation on the image.
You did not disclose the specific operations you need to perform. That is important to know however, if you want to parallelize those operations.
You need to learn about strategies for parallelization in general. Commonly, a "kernel" is executed on several partitions of the data in parallel.
One practical approach in C/C++ is OpenMP: you apply "pragmas" to your own loops and OpenMP spreads the loop iterations across different threads. In .NET/C#, the Task Parallel Library (Parallel.For, Parallel.Invoke) plays the same role.
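As a minimal C# sketch of that idea, using the Task Parallel Library on the two halves of an image (processHalf stands in for whatever operation you actually run, so it is an assumption here):

```csharp
using System;
using System.Drawing;
using System.Threading.Tasks;
using Emgu.CV;

static void ProcessInParallel(Mat image, Action<Mat> processHalf)
{
    var topRect = new Rectangle(0, 0, image.Width, image.Height / 2);
    var bottomRect = new Rectangle(0, image.Height / 2,
                                   image.Width, image.Height - image.Height / 2);

    // Mat(Mat, Rectangle) creates a view into the same pixel buffer, so the
    // "merge" step is free: writes land directly in the original image.
    using (Mat top = new Mat(image, topRect))
    using (Mat bottom = new Mat(image, bottomRect))
    {
        Parallel.Invoke(() => processHalf(top), () => processHalf(bottom));
    }
}
```

Because the halves are views rather than copies, there is no merge cost; just make sure processHalf only writes inside its own half.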

Some Background remain after Background Subtraction EMGU CV

I am a newbie at EmguCV image processing and am trying different methods of background subtraction. I came across the AbsDiff method and gave it a try, but after a bunch of processing, some parts of the object appear transparent and the background behind them can be seen: Background subtraction sample
here is the part of my code that processes the image
img = _capture.QueryFrame().ToImage<Bgr, Byte>();
Mat smoothedFrame = new Mat();
CvInvoke.GaussianBlur(img, smoothedFrame, new Size(3, 3), 1);
img3 = img2gray.AbsDiff(smoothedFrame.ToImage<Gray, Byte>());
img3 = img3.ThresholdBinary(new Gray(60), new Gray(255));
IbOriginal.Image = img;
IbProcessed.Image = img3;
How can I remove those "blank or hollow" spaces in the image above? Any help would be much appreciated.
I'm guessing you want to create a mask with pixels of the truck only. You may have taken away pixels in the hollow spaces with
ThresholdBinary(new Gray(60), new Gray(255));
Decreasing the lower threshold might be what you need, but it might let in some background noise too. You can always identify the location of the truck first with a higher threshold (what you have done here), then apply ThresholdBinary with a lower threshold to a previous frame inside the identified ROI.
ThresholdBinary(new Gray(10), new Gray(255));
Or you can try CvInvoke.FloodFill.
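Besides FloodFill, a morphological "close" (dilate then erode) is a common way to fill hollow spots in a binary mask. A hedged sketch; the 15x15 kernel size is a guess you would tune to the size of the holes:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

// Build a rectangular structuring element; bigger kernels close bigger holes.
Mat kernel = CvInvoke.GetStructuringElement(
    Emgu.CV.CvEnum.ElementShape.Rectangle,
    new Size(15, 15),
    new Point(-1, -1));

// Close = dilate then erode: bridges gaps without growing the blob overall.
CvInvoke.MorphologyEx(img3, img3, Emgu.CV.CvEnum.MorphOp.Close, kernel,
    new Point(-1, -1), 1, Emgu.CV.CvEnum.BorderType.Default, new MCvScalar());
```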

emgu c# object tracking moments

Well, I am trying to make an object tracker. I produced the filtered image, which tracks the object and converts it to white. I used this to get the filtered image:
CvInvoke.cvInRangeS(HSVimg, low, high, THImg);
Now I am trying to get the contours and the center point, so I used this (can't test it yet):
using (Image<Gray, Byte> canny = smoothedRedMask.Canny(100.0, 50.0))
using (MemStorage stor = new MemStorage())
{
    Contour<Point> contours = canny.FindContours(
        Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
        Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE,
        stor);
}
So I have two questions: what does the Canny method do?
How do I draw a shape around the tracked object and then get its center point using moments or some other method?
You don't have to write code; just point me to simple example code that I can use.
The Canny function is an implementation of an edge detection algorithm; it uses a multi-stage process to detect a wide range of edges in images.
Refer to this Wikipedia article or this tutorial/code to understand it better.
The other part of the question is a bit more tricky as drawing a shape around the tracked object will depend on the quality of the image received after applying the canny edge detection and also the geometry of the object.
So you might want to adjust values of the canny function to suit your needs.
But you can refer to these YouTube video tutorials to better understand/code your object tracking logic.
Video 1
Video 2.
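As a rough sketch of the draw-a-shape-and-find-the-center part, using the newer CvInvoke API rather than the old Contour<Point> one. Here canny and frame are assumed to be your edge image and display frame, and the MCvMoments member names can differ between Emgu versions:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

using (var contours = new Emgu.CV.Util.VectorOfVectorOfPoint())
using (Mat hier = new Mat())
{
    CvInvoke.FindContours(canny, contours, hier,
        Emgu.CV.CvEnum.RetrType.External,
        Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);

    for (int i = 0; i < contours.Size; i++)
    {
        // Bounding box "shape" around the tracked object.
        Rectangle box = CvInvoke.BoundingRectangle(contours[i]);
        CvInvoke.Rectangle(frame, box, new MCvScalar(0, 255, 0), 2);

        // Center point from image moments: (m10/m00, m01/m00).
        MCvMoments m = CvInvoke.Moments(contours[i]);
        if (m.M00 > 0)
        {
            var center = new Point((int)(m.M10 / m.M00), (int)(m.M01 / m.M00));
            CvInvoke.Circle(frame, center, 3, new MCvScalar(0, 0, 255), -1);
        }
    }
}
```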
Hoping it helps.

Template matching using a mask in OpenCV (Emgu)

I would like to find a piece of an image inside another image. However, there are some regions of pixels in both images that I don't want to take into account. So I was thinking of using some type of mask of zeros and ones to indicate the good pixels.
I am using the MatchTemplate method from emgu and it does not accept a mask. Is there any other way of doing what I would like to do? Thank you!
ReferenceImage.MatchTemplate(templateImage, Emgu.CV.CvEnum.TM_TYPE.CV_TM_CCORR_NORMED);
I thought of a solution. Assume that referenceImageMask and templateMask have 1s in the good pixels and 0s in the bad ones, and that referenceImage and templateImage have already been masked and have 0s in the bad pixels as well.
Then, the first result of template matching will give the not normalized cross correlation between the images.
The second template matching will give for each possible offset the number of pixels that were at the same time different from zero (unmasked) in both images.
Then, normalizing the correlation by that number should give the value I wanted. The average product for the pixels that are not masked in both images.
Image<Gray, float> imCorr = referenceImage.MatchTemplate(templateImage, Emgu.CV.CvEnum.TM_TYPE.CV_TM_CCORR);
Image<Gray, float> imCorrMask = referenceImageMask.MatchTemplate(templateMask, Emgu.CV.CvEnum.TM_TYPE.CV_TM_CCORR);
imCorr = imCorr.Mul(imCorrMask.Pow(-1));
Today you could use this method:
CvInvoke.MatchTemplate(actualImage, expectedImage, result, TemplateMatchingType.CcoeffNormed, mask);

Template matching - how to ignore pixels

I'm trying to find a digit within an image. To test my code I took an image of the digit and then used AForge's Exhaustive Template Matching algorithm to search for it in another image. But I think there is a problem in that the digit is obviously not rectangular whereas the image that contains it is. That means that there are a lot of pixels participating in the comparison which shouldn't be. Is there any way to make this comparison while ignoring those pixels? If not in AForge then maybe EMGU/OpenCV or Octave?
Here's my code:
Grayscale gray = new GrayscaleRMY();
Bitmap template = (Bitmap)Bitmap.FromFile(@"5template.png");
template = gray.Apply(template);
Bitmap image = (Bitmap)Bitmap.FromFile(filePath);
Bitmap sourceImage = gray.Apply(image);
ExhaustiveTemplateMatching tm = new ExhaustiveTemplateMatching(0.7f);
TemplateMatch[] matchings = tm.ProcessImage(sourceImage, template);
As mentioned above in the comment, you should preprocess your data to improve matching.
The first thing that comes to mind is morphological opening (erode then dilate) to reduce the background noise
Read in your image and invert it so that your character vectors are white:
Apply opening with smallest possible structuring element/window (3x3):
You could try slightly larger structuring elements (5x5):
Invert it back to the original:
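In Emgu CV terms, the invert → open → invert recipe above might look like this. Here digits is assumed to be your grayscale input, with the 3x3 (or 5x5) structuring element as suggested:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

// Invert so the character strokes are white (foreground for morphology).
Image<Gray, byte> inverted = digits.Not();

// Smallest rectangular structuring element; try new Size(5, 5) if noise remains.
Mat kernel = CvInvoke.GetStructuringElement(
    Emgu.CV.CvEnum.ElementShape.Rectangle, new Size(3, 3), new Point(-1, -1));

// Opening = erode then dilate: removes small white noise specks.
CvInvoke.MorphologyEx(inverted, inverted, Emgu.CV.CvEnum.MorphOp.Open, kernel,
    new Point(-1, -1), 1, Emgu.CV.CvEnum.BorderType.Default, new MCvScalar());

// Invert back to the original polarity before template matching.
Image<Gray, byte> cleaned = inverted.Not();
```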
See if that helps!

Categories

Resources