Some background remains after background subtraction (EmguCV, C#)

I am a newbie at EmguCV image processing and am trying different methods of background subtraction. I came across the AbsDiff method and gave it a try, but after a bunch of processing some parts of the object appear transparent and the background behind them can be seen (background subtraction sample).
Here is the part of my code that processes the image:
img = _capture.QueryFrame().ToImage<Bgr, Byte>();              // grab the current frame
Mat smoothedFrame = new Mat();
CvInvoke.GaussianBlur(img, smoothedFrame, new Size(3, 3), 1);   // light smoothing to suppress noise
img3 = img2gray.AbsDiff(smoothedFrame.ToImage<Gray, Byte>());   // difference against img2gray (grayscale frame prepared earlier)
img3 = img3.ThresholdBinary(new Gray(60), new Gray(255));       // binarise the difference
IbOriginal.Image = img;
IbProcessed.Image = img3;
How can I remove those "blank or hollow" spaces in the image above? Any help would be much appreciated.

I'm guessing you want to create a mask with pixels of the truck only. You may have taken away pixels in the hollow spaces with
ThresholdBinary(new Gray(60), new Gray(255));
Decreasing the lower threshold might give you what you need, but it may let in some background noise too. Alternatively, identify the location of the truck first with the higher threshold (what you've done here), then apply ThresholdBinary with a lower threshold on the original difference image, restricted to the ROI you identified:
ThresholdBinary(new Gray(10), new Gray(255));
Or you can try CvInvoke.FloodFill
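Here is a rough sketch of that two-pass idea, not your exact code: diff stands for the un-thresholded AbsDiff image (img3 before ThresholdBinary is applied), the bounding box is taken from the strict pass with FindContours, and the thresholds 60 and 10 are the ones discussed above.
Image<Gray, byte> TwoPassMask(Image<Gray, byte> diff)
{
    // Pass 1: strict threshold, just to locate the truck.
    Image<Gray, byte> strict = diff.ThresholdBinary(new Gray(60), new Gray(255));

    // Bounding box of the largest blob found by the strict pass.
    Rectangle roi = Rectangle.Empty;
    using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
    using (Mat hierarchy = new Mat())
    {
        CvInvoke.FindContours(strict.Copy(), contours, hierarchy,
            RetrType.External, ChainApproxMethod.ChainApproxSimple);
        for (int i = 0; i < contours.Size; i++)
        {
            Rectangle r = CvInvoke.BoundingRectangle(contours[i]);
            if (r.Width * r.Height > roi.Width * roi.Height)
                roi = r;
        }
    }

    // Pass 2: permissive threshold, kept only inside that bounding box,
    // so dark parts of the truck survive without flooding the whole frame.
    Image<Gray, byte> loose = diff.ThresholdBinary(new Gray(10), new Gray(255));
    Image<Gray, byte> box = diff.CopyBlank();
    box.Draw(roi, new Gray(255), -1);   // filled rectangle over the truck
    return loose.And(box);
}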

Related

EmguCV C# : FindContours() to detect different shapes

I have this image:
What I am trying to do is detect its contours. Looking at the documentation and some code on the web, I wrote this:
Image<Gray, byte> image = receivedImage.Convert<Gray, byte>().ThresholdBinary(new Gray(80), new Gray(255));
Emgu.CV.Util.VectorOfVectorOfPoint contours = new Emgu.CV.Util.VectorOfVectorOfPoint();
Mat hier = new Mat();
CvInvoke.FindContours(image, contours, hier, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
CvInvoke.DrawContours(receivedImage, contours, 0, new MCvScalar(255, 0, 0), 2);
Then it detects this contour in blue:
Now I would like to detect both rectangles as different contours, so the result would be this:
(made with Paint) So now I would like to detect the two rectangles separately (the blue and red rectangles would be two different contours), but I have no idea how to do that!
Thanks in advance for your help ! ;)
The problem comes from ThresholdBinary. As I assume you understand, this method returns a binary image in which all pixels greater than or equal to the threshold parameter are pulled up to the maxValue parameter, and all those below are pulled down to 0. The produced image therefore consists of only two values (binary): 0 or maxValue. If we follow your example with some assumed gray values:
After Image<Gray, byte> image = receivedImage.Convert<Gray, byte>().ThresholdBinary(new Gray(80), new Gray(255));, you will produce:
This is in fact the image that you are passing to CvInvoke.FindContours(), and consequently it finds only the outermost contour.
What you need, if indeed you want to continue with FindContours, is an algorithm that will "bin" or "band-pass" your image to first produce separate segments, each of which is then converted to binary and contour-detected independently; see the sketch below.
I feel that your current example is probably an oversimplification of the problem for me to offer a solution on how you might achieve that here. However, please do ask another question with more realistic data, and I will be happy to provide some suggestions.
Alternatively, look towards more sophisticated edge detection methods such as Canny or Sobel. This video may be a good starting point: Edge Detection
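On idealized data like your example, the band-pass idea could be sketched roughly as follows. The intensity bounds are made-up values for illustration only; InRange keeps just the pixels whose gray value falls inside each range, so the two rectangles end up in different binary images and get their own contours.
Image<Gray, byte> gray = receivedImage.Convert<Gray, byte>();

// Each band isolates one intensity range (illustrative bounds).
Image<Gray, byte>[] bands =
{
    gray.InRange(new Gray(60), new Gray(120)),    // darker rectangle
    gray.InRange(new Gray(121), new Gray(200)),   // lighter rectangle
};

foreach (Image<Gray, byte> band in bands)
{
    using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
    using (Mat hierarchy = new Mat())
    {
        CvInvoke.FindContours(band, contours, hierarchy,
            RetrType.External, ChainApproxMethod.ChainApproxSimple);
        CvInvoke.DrawContours(receivedImage, contours, -1, new MCvScalar(255, 0, 0), 2);
    }
}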

(EMGU) How do I split and merge an Image?

I am working in C# on Visual Studio with Emgu.
I am doing several image manipulations on a large image. I had the idea of splitting the image in half, doing the manipulations in parallel, then merging the image back together.
In pursuit of this goal, I have found a number of questions about acquiring rectangular parts of images for processing, as well as splitting an image into channels (RGB, HSV, etc.). I have not found a question that addresses taking an image and making it into two images, nor one that addresses taking two images and tacking them back together.
The following code is what I would like to do, where split and merge are imaginary methods to accomplish it.
Image<Bgr,Byte> ogImage = new Image<Bgr, byte>(request.image);
Image<Bgr,Byte> topHalf = new Image<Bgr, byte>();
Image<Bgr,Byte> bottomHalf = new Image<Bgr, byte>();
ogImage.splitHorizontally(topHalf, bottomHalf);
//operations
ogImage = topHalf.merge(bottomHalf);
This is the type of question I hate asking, because it is simple and you would think it has a simple, easily available solution, but I have not found it, or I have found it and not understood it.
There are a number of ways to solve this but here is what I did. I took the easiest way out ;-)
Mat lena = new Mat(@"D:\OpenCV\opencv-3.2.0\samples\data\Lena.jpg",
                   ImreadModes.Unchanged);
CvInvoke.Imshow("Lena", lena);

// Top and bottom halves described as rectangles.
System.Drawing.Rectangle topRect = new Rectangle(0, 0, lena.Width, lena.Height / 2);
System.Drawing.Rectangle bottomRect = new Rectangle(0, lena.Height / 2, lena.Width, lena.Height / 2);

// A Mat constructed from another Mat and a rectangle is a view on that region.
Mat lenaTop = new Mat(lena, topRect);
CvInvoke.Imshow("Lena Top", lenaTop);
Mat lenaBottom = new Mat(lena, bottomRect);
CvInvoke.Imshow("Lena Bottom", lenaBottom);

// Stack the two halves vertically again (deliberately swapped here).
Mat newLena = new Mat();
CvInvoke.VConcat(lenaBottom, lenaTop, newLena);
CvInvoke.Imshow("New Lena", newLena);
CvInvoke.WaitKey(0);
Original Lena
Lena Top Half
Lena Bottom Half
The New Lena Rearranged
Your goal isn't splitting an image. Your goal is to parallelize some operation on the image.
You did not disclose the specific operations you need to perform, but that is important to know if you want to parallelize them.
You need to learn about strategies for parallelization in general. Commonly, a "kernel" is executed on several partitions of the data in parallel.
One practical approach is called OpenMP: you apply "pragmas" to your own loops and OpenMP spreads those loop iterations across different threads.
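OpenMP itself targets C and C++; in C# the closest equivalent is the Task Parallel Library. Below is a rough sketch of the same idea applied to the two halves of the image. The Gaussian blur is only a stand-in for whatever operations you actually run, and the variable names follow the question's pseudocode.
Image<Bgr, byte> ogImage = new Image<Bgr, byte>(request.image);

Rectangle topRect = new Rectangle(0, 0, ogImage.Width, ogImage.Height / 2);
Rectangle bottomRect = new Rectangle(0, ogImage.Height / 2,
                                     ogImage.Width, ogImage.Height - ogImage.Height / 2);

Image<Bgr, byte> topHalf = ogImage.Copy(topRect);
Image<Bgr, byte> bottomHalf = ogImage.Copy(bottomRect);

// Run the per-half work on separate threads.
Parallel.Invoke(
    () => topHalf = topHalf.SmoothGaussian(5),        // stand-in operation
    () => bottomHalf = bottomHalf.SmoothGaussian(5));  // stand-in operation

// Stitch the processed halves back together.
Mat merged = new Mat();
CvInvoke.VConcat(topHalf.Mat, bottomHalf.Mat, merged);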

Template matching - how to ignore pixels

I'm trying to find a digit within an image. To test my code I took an image of the digit and then used AForge's Exhaustive Template Matching algorithm to search for it in another image. But I think there is a problem: the digit is obviously not rectangular, whereas the image that contains it is, which means a lot of pixels participate in the comparison that shouldn't. Is there any way to make the comparison while ignoring those pixels? If not in AForge, then maybe in EMGU/OpenCV or Octave?
Here's my code:
// Convert the template and the source image to grayscale before matching.
Grayscale gray = new GrayscaleRMY();
Bitmap template = (Bitmap)Bitmap.FromFile(@"5template.png");
template = gray.Apply(template);
Bitmap image = (Bitmap)Bitmap.FromFile(filePath);
Bitmap sourceImage = gray.Apply(image);

// Exhaustive template matching with a 0.7 similarity threshold.
ExhaustiveTemplateMatching tm = new ExhaustiveTemplateMatching(0.7f);
TemplateMatch[] matchings = tm.ProcessImage(sourceImage, template);
As mentioned above in the comment, you should preprocess your data to improve matching.
The first thing that comes to mind is morphological opening (erode then dilate) to reduce the background noise.
Read in your image and invert it so that your character vectors are white:
Apply opening with smallest possible structuring element/window (3x3):
You could try slightly larger structuring elements (5x5):
Invert it back to the original:
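Putting those steps together with AForge filters (a rough sketch; the question already uses AForge, and Opening falls back to a 3x3 structuring element when you don't pass one):
// sourceImage is the grayscale image from the question's code.
Invert invert = new Invert();
Opening opening = new Opening();          // 3x3 structuring element by default

Bitmap work = invert.Apply(sourceImage);  // make the character strokes white
work = opening.Apply(work);               // erode then dilate: small noise disappears
Bitmap cleaned = invert.Apply(work);      // back to the original polarity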
See if that helps!

Working with zoomed GraphicsPath is really slow

Do you have any idea how the Graphics object uses resources?
I am drawing several thousand GraphicsPath objects with latitude/longitude coordinates on a Panel. Initially those GraphicsPaths have to be zoomed (transformed; four matrix transformations, actually). Then the user can move the map around and zoom, with each action calling for the graphics paths to be repainted.
The problem is that the whole thing is still responsive when the zoom level is around 2,000-10,000, but when it gets to hundreds of thousands (which is the street-level zoom) it takes too long to paint and makes the whole application unresponsive. Checking free memory, there is still plenty; CPU usage is also OK.
How come drawing the same thousands of GraphicsPaths, each with the same four matrix transformations, becomes extremely slow when the zoom factor is increased? Is the problem in System.Drawing itself when handling GraphicsPath coordinates with large values? Have you ever faced the same problem?
Sorry, good people, for not including the code; here is the chunk of the "slow" code, basically the iteration part of the _paint method. It runs over 30,000 GraphicsPaths, most of which are polylines extracted from ESRI .shp files. The x coordinates are positive and the y coordinates are negative and flipped upside down, hence the matrix transforms required to paint on the panel. The problem is that at a low value of the variable zI it is much faster than at a high value of zI. A high zI means much of the graphics path lies outside the painted area. I tried to reduce the work by checking IsVisible or by intersecting rectangle bounds, but that is still not fast enough. Any ideas?
foreach (GraphicsPath vectorDraw in currentShape.vectorPath)
{
    GraphicsPath paintPath = (GraphicsPath)vectorDraw.Clone();

    // Shift by the current viewport origin.
    OperationMatrix = new Matrix();
    OperationMatrix.Translate(-DisplayPort.X, -DisplayPort.Y);
    paintPath.Transform(OperationMatrix);

    // Scale by the zoom factor.
    OperationMatrix = new Matrix();
    OperationMatrix.Scale(zI, zI);
    paintPath.Transform(OperationMatrix);

    // Flip the y axis so the map is not upside down.
    OperationMatrix = new Matrix(1, 0, 0, -1, 0, DisplaySize.Height);
    paintPath.Transform(OperationMatrix);

    // Offset by the client-area gap.
    OperationMatrix = new Matrix();
    OperationMatrix.Translate(ClientGap.Width, -ClientGap.Height);
    paintPath.Transform(OperationMatrix);

    //if (WiredPort.IsVisible(paintPath.GetBounds())) // futile attempt at culling
    //{
    Pen LandBoundariesPen = new Pen(Color.FromArgb(255, 225, 219, 127));
    GraphContext.DrawPath(LandBoundariesPen, paintPath); // this is the slowest part; when commented out it goes faster
    pathCountX++;
}
Help .... :)
For high-performance rendering, DirectX is preferred over WPF. You can also consider using OpenGL in C#.
Edit: for a tutorial on how to use OpenGL in C# via the Tao framework, see the link below:
http://xinyustudio.wordpress.com/2008/12/01/using-opengl-in-c-taoframework/

Is it possible to remove small patches of an image using Blob analysis?

I am wondering what exactly a blob is. Is it possible to reduce background noise in an image? Or is it possible to find the largest region in an image; more specifically, if an image contains only hand and head segments, is it possible to separate the hand or head regions? If so, is it also possible to keep the boundary with the larger contour while eliminating small patches in the image?
Please advise: I have an image containing a hand gesture only, obtained with a skin-detection technique. The problem is that I have other small noise regions in the image that have the same colour as the hand (skin). I want the typical hand gesture only, with the noise removed. Help me??
Using the example from AForge, is there any reason you can't just clear the small bits from your image?
// create an instance of blob counter algorithm
BlobCounterBase bc = new ...
// set filtering options
bc.FilterBlobs = true;
bc.MinWidth = 5;
bc.MinHeight = 5;
// process binary image
bc.ProcessImage( image );
Blob[] blobs = bc.GetObjects( image, false );

// process blobs
var rectanglesToClear = from blob in blobs select blob.Rectangle;
using (var gfx = Graphics.FromImage(image))
{
    foreach (var rect in rectanglesToClear)
    {
        if (rect.Height < someMaxHeight && rect.Width < someMaxWidth)
            gfx.FillRectangle(Brushes.Black, rect);
    }
    gfx.Flush();
}
Have a look at morphological opening: this performs an erosion followed by a dilation and essentially removes areas of foreground/background smaller than a "structuring element", the size (and shape) of which you can specify.
I don't know AForge, but in Matlab the reference is here, and in OpenCV see here.
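For completeness, here is a rough EmguCV version of that opening step. The 5x5 elliptical kernel is an illustrative choice, and the file name stands in for however you obtain the binary skin mask.
// Load the binary mask produced by the skin-detection step (placeholder path).
Mat skinMask = CvInvoke.Imread("skin_mask.png", ImreadModes.Unchanged);

Mat kernel = CvInvoke.GetStructuringElement(ElementShape.Ellipse,
                                            new Size(5, 5), new Point(-1, -1));
Mat cleaned = new Mat();

// Opening = erosion followed by dilation; blobs smaller than the kernel are removed.
CvInvoke.MorphologyEx(skinMask, cleaned, MorphOp.Open, kernel,
                      new Point(-1, -1), 1, BorderType.Default, new MCvScalar());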
