I have a Kinect WPF application that takes images from the Kinect, does some feature detection using EmguCV (a C# OpenCV wrapper) and displays the output using a WPF Image control.
I had this working before, but now the application refuses to update the on-screen image when the image source is written to, even though I have not changed the way it works.
The Image (called video) is written to like this:
video.Source = bitmapsource;
in the ColorFrameReady event handler.
This works fine until I introduce some OpenCV code before the image source is written to. It does not matter which source is used, so I don't think there is a conflict there. I have narrowed the offending EmguCV code down to this line:
RecentKeyPoints = surfCPU.DetectKeyPointsRaw(ImageRecent, null);
which jumps straight into the OpenCV code. It is worth noting that:
ImageRecent has completely different origins from the bitmapsource that updates the screen.
Reading video.Source returns the bitmapsource, so it seems to be writing correctly, just not updating the screen.
Let me know if you want any more information...
void nui_ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    // Checks for a recent Depth Image
    if (!TrackingReady) return;

    // Stores image
    using (ColorImageFrame colorImageFrame = e.OpenColorImageFrame())
    {
        if (colorImageFrame != null)
        {
            if (FeatureTracker.ColourImageRecent == null)
                // allocate the first time
                FeatureTracker.ColourImageRecent = new byte[colorImageFrame.PixelDataLength];
            colorImageFrame.CopyPixelDataTo(FeatureTracker.ColourImageRecent);
        }
        else return;
    }

    FeatureTracker.FeatureDetect(nui);
    //video.Source = FeatureTracker.ColourImageRecent.ToBitmapSource();
    video.Source = ((Bitmap)Bitmap.FromFile("test1.png")).ToBitmapSource();
    TrackingReady = false;
}
public Bitmap FeatureDetect(KinectSensor nui)
{
    byte[] ColourClone = new byte[ColourImageRecent.Length];
    Array.Copy(ColourImageRecent, ColourClone, ColourImageRecent.Length);

    Bitmap test = (Bitmap)Bitmap.FromFile("test1.png");
    test.RotateFlip(RotateFlipType.RotateNoneFlipY);
    Image<Gray, Byte> ImageRecent = new Image<Gray, byte>(test);

    SURFDetector surfCPU = new SURFDetector(2000, false);

    VectorOfKeyPoint RecentKeyPoints;
    Matrix<int> indices;
    Matrix<float> dist;
    Matrix<byte> mask;
    bool MatchFailed = false;

    // extract SURF features from the object image
    RecentKeyPoints = surfCPU.DetectKeyPointsRaw(ImageRecent, null);
    //Matrix<float> RecentDescriptors = surfCPU.ComputeDescriptorsRaw(ImageRecent, null, RecentKeyPoints);
    //MKeyPoint[] RecentPoints = RecentKeyPoints.ToArray();

    // don't feature detect on first attempt, just store image details for next attempt
    #region
    /*
    if (KeyPointsOld == null)
    {
        KeyPointsOld = RecentKeyPoints;
        PointsOld = RecentPoints;
        DescriptorsOld = RecentDescriptors;
        return ImageRecent.ToBitmap();
    }
    */
    #endregion

    // Attempt to match points to their nearest neighbour
    #region
    /*
    BruteForceMatcher SURFmatcher = new BruteForceMatcher(BruteForceMatcher.DistanceType.L2F32);
    SURFmatcher.Add(RecentDescriptors);
    int k = 5;
    indices = new Matrix<int>(DescriptorsOld.Rows, k);
    dist = new Matrix<float>(DescriptorsOld.Rows, k);
    */

    // Match features, provide the top k matches
    //SURFmatcher.KnnMatch(DescriptorsOld, indices, dist, k, null);
    // Create mask and set to allow all features
    //mask = new Matrix<byte>(dist.Rows, 1);
    //mask.SetValue(255);
    #endregion

    //Features2DTracker.VoteForUniqueness(dist, 0.8, mask);

    // Check number of good matches and for error and end matching if true
    #region
    //int nonZeroCount = CvInvoke.cvCountNonZero(mask);
    //if (nonZeroCount < 5) MatchFailed = true;
    /*
    try
    {
        nonZeroCount = Features2DTracker.VoteForSizeAndOrientation(RecentKeyPoints, KeyPointsOld, indices, mask, 1.5, 20);
    }
    catch (SystemException)
    {
        MatchFailed = true;
    }
    if (nonZeroCount < 5) MatchFailed = true;
    if (MatchFailed)
    {
        return ImageRecent.ToBitmap();
    }
    */
    #endregion

    //DepthMapColourCoordsRecent = CreateDepthMap(nui, DepthImageRecent);
    //PointDist[] FeatureDistances = DistanceToFeature(indices, mask, RecentPoints);

    //Image<Rgb,Byte> rgbimage = ImageRecent.Convert<Rgb, Byte>();
    //rgbimage = DrawPoints(FeatureDistances, rgbimage);

    // Store recent image data for next feature detect.
    //KeyPointsOld = RecentKeyPoints;
    //PointsOld = RecentPoints;
    //DescriptorsOld = RecentDescriptors;

    //CreateDepthMap(nui, iva);
    //rgbimage = CreateDepthImage(DepthMapColourCoordsRecent, rgbimage);

    // Convert image back to a bitmap
    count++;
    //Bitmap bitmap3 = rgbimage.ToBitmap();
    //bitmapstore = bitmap3;
    //bitmap3.Save("test" + count.ToString() + ".png");

    return null;
}
This is a little late, but I had a similar problem and thought I'd share my solution.
In my case I was processing the depth stream. The default resolution was 640x480, and Emgu just wasn't able to process the image fast enough to keep up with the frameready handler. As soon as I reduced the depth stream resolution to 320x240 the problem went away.
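For reference, with the Kinect for Windows SDK 1.x the depth resolution is chosen when the stream is enabled. A minimal sketch, assuming a KinectSensor instance named sensor (not the poster's exact code):

// Enable the depth stream at 320x240 so the Emgu processing can keep up with the FrameReady events.
sensor.DepthStream.Enable(DepthImageFormat.Resolution320x240Fps30);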
I also went a bit further and moved my image processing to a different thread which sped it up even more (do a search for ComponentDispatcher.ThreadIdle). I'm still not able to do 640x480 at a reasonable frame rate, but at least the image renders so I can see what's going on.
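Below is a minimal sketch of that second idea applied to the question's handler: copy the pixels inside ColorFrameReady and defer the EmguCV work until ComponentDispatcher.ThreadIdle (System.Windows.Interop) fires. The field and method names here are illustrative, not the poster's code:

using System;
using System.Windows.Interop;   // ComponentDispatcher
using Microsoft.Kinect;

public partial class MainWindow
{
    private byte[] pendingPixels;     // latest copied colour frame, if any
    private bool idleHandlerHooked;

    void nui_ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
    {
        using (ColorImageFrame frame = e.OpenColorImageFrame())
        {
            if (frame == null) return;
            if (pendingPixels == null)
                pendingPixels = new byte[frame.PixelDataLength];
            frame.CopyPixelDataTo(pendingPixels);   // only copy here, no EmguCV work
        }

        if (!idleHandlerHooked)
        {
            ComponentDispatcher.ThreadIdle += OnThreadIdle;   // process when the UI is idle
            idleHandlerHooked = true;
        }
    }

    private void OnThreadIdle(object sender, EventArgs e)
    {
        if (pendingPixels == null) return;
        // ... run the SURF detection on pendingPixels and update video.Source here ...
        pendingPixels = null;
    }
}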
As the title says, I want to locate the three position patterns (finder patterns) of a QR code.
Example
I want to know how to get the x/y positions of those patterns when I capture a new QR code from a WebCamTexture.
How should I implement this in Unity (C#)?
Use the following code for decoding with the ZXing dll.
private WebCamTexture camTexture;
private Rect screenRect;

void Start()
{
    screenRect = new Rect(0, 0, Screen.width, Screen.height);
    camTexture = new WebCamTexture();
    camTexture.requestedHeight = Screen.height;
    camTexture.requestedWidth = Screen.width;
    if (camTexture != null)
    {
        camTexture.Play();
    }
}

void OnGUI()
{
    // drawing the camera on screen
    GUI.DrawTexture(screenRect, camTexture, ScaleMode.ScaleToFit);

    // do the reading; you might want to attempt to read less often than you draw on the screen for performance's sake
    try
    {
        IBarcodeReader barcodeReader = new BarcodeReader();
        // decode the current frame
        var result = barcodeReader.Decode(camTexture.GetPixels32(), camTexture.width, camTexture.height);
        if (result != null)
        {
            Debug.Log("DECODED TEXT FROM QR: " + result.Text);
            // ResultPoints holds the positions of the finder patterns in image coordinates.
            ResultPoint[] point = result.ResultPoints;
            Debug.Log("X: " + point[0].X + " Y: " + point[0].Y);
        }
    }
    catch (Exception ex) { Debug.LogWarning(ex.Message); }
}
I took this from the ZXing dll link; the readme there also has a QR code generator, so go through it. The code is almost the same; only the line ResultPoint[] point = result.ResultPoints; has been added. It gives the positions of the three corners of the QR code in the image. Obviously you will need to add ZXing.dll to the Plugins folder in Assets.
Hope this helps you get the result.
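As a small hedged addition (not from the answer above): once you have the three ResultPoints you can, for example, average them to get a rough centre of the code in camera-image pixel coordinates. Which point corresponds to which finder pattern is an assumption you should verify against the ZXing documentation:

// Rough centre of the QR code, averaged from the finder-pattern positions.
// Coordinates are in the camera image, not in Unity screen space.
private Vector2 GetApproximateCenter(ResultPoint[] points)
{
    float sumX = 0f, sumY = 0f;
    foreach (ResultPoint p in points)
    {
        sumX += p.X;
        sumY += p.Y;
    }
    return new Vector2(sumX / points.Length, sumY / points.Length);
}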
I am trying to change pictureBox.Image at runtime. I have several model classes with a picture stored; whenever I click on a MenuStripItem I call the method ChangePictureBoxImages. Up to then there is no error (the PictureBox is invisible!), but once I call the method to make the PictureBox visible I get an error. The error code: "An unhandled exception of type 'System.ArgumentException' occurred in System.Drawing.dll".
Research said I should dispose the PictureBox and set it to null; however, this does NOT help.
My Code:
using (Image current = BitmapManipulator.EvaluateMesurement(CSV_Name1, max_Rows, max_Col, var.TopImage, var.BitmapToManipulate, pB_ColourScale_Evaluation.Image, var.BitmapToManipulate, var.Filepath, var.FoldID))
{
    var.LastEvaluationImage = current;
    BitmapManipulator.CombineImagesAndSaveThem_Evaluation(var.TopImage, var.BitmapToManipulate, pB_ColourScale_Evaluation.Image, var.Filepath, var.FoldID); // saves the files as jpg
    if (var.CurrentlyShownToUser) // checks if the MenuStripItem is the active one
    {
        if (var.LastEvaluationImage == null) { MessageBox.Show("the image is null"); } // only for debugging purposes -> never gets called
        ChangePictureBoxImages();
    }
}
and the ChangePictureBoxImages():
public void ChangePictureBoxImages()
{
    foreach (Fold fold in FoldExisting)
    {
        if (fold.FoldID == LastSelectedMenuStripItem_Name) // the clicked item is the last selected MenuStripItem
        {
            if (fold.LastEvaluationImage != null)
            {
                Debug.WriteLine(pB_Evaluation_Bottom.Image.ToString() + " " + fold.LastEvaluationImage.ToString());
                pB_Evaluation_Bottom.Image = fold.LastEvaluationImage;
            }
            pB_Evaluation_Top.Image = fold.TopImage;
        }
    }
}
There is no error until then; the error appears once I call pB_Evaluation_Bottom.Visible = true. (Or, if I make it visible first, the error appears upon changing the image!) The error also appears when clicking the MenuStripItem twice. I load the picture from the Fold class as follows.
This sets an image in the Fold class; the image is then manipulated and stored in LastEvaluationImage:
private void setTheImages(string PictureToManipulate, string PathToTopImage)
{
    try
    {
        this.BitmapToManipulate_intern = (Image)Image.FromFile(PictureToManipulate, true);
        this.TopImage_intern = (Image)Image.FromFile(PathToTopImage, true);
    }
    catch (ArgumentNullException ex)
    {
        Debug.WriteLine("The BitMap for the manipulation process and the top image is not created.");
    }
}
and the LastEvaluationImage property where the last picture is stored -> this is what becomes the new pB.Image:
private Image LastEvaluationImage_intern;

public Image LastEvaluationImage
{
    get
    {
        return this.LastEvaluationImage_intern;
    }
    set
    {
        if (LastEvaluationImage_intern != null) { LastEvaluationImage_intern.Dispose(); LastEvaluationImage_intern = null; }
        this.LastEvaluationImage_intern = value;
        this.LastEvaluationTime_intern = DateTime.Now;
    }
}
I know this is a little complex, but I hope someone can help me.
THANKS IN ADVANCE!
UPDATE: The error must be in the following code.
The BitmapManipulator.EvaluateMesurement code:
public Image EvaluateMesurement(double[][] MeasuredValues, int max_Rows, int max_Col, Image pB_Evaluation_Top, Image pB_Evaluation_Bottom, Image pB_EvaluationColourScale, Image ManipulatedBitmap, string PathMeasurementFiles, string Foldname)
{
    using (Bitmap bitmap = new Bitmap(ManipulatedBitmap))
    {
        // the data array sizes:
        int number_nio = 0;
        int number_total = 0;
        List<FileInfo> LastFiles;
        int got_number_for_trends = Properties.Settings.Default.TrendNumber;
        SolidBrush myBrush = new SolidBrush(red);

        using (Graphics g = Graphics.FromImage(bitmap))
        {
            Random rnd = new Random(8);
            int[,] data = new int[max_Col, max_Rows];

            // scale the tile size:
            float sx = 1f * bitmap.Width / data.GetLength(0);
            float sy = 1f * bitmap.Height / data.GetLength(1);

            LastFiles = FM.GetLastFiles_Trend(ref got_number_for_trends, PathMeasurementFiles);
            double[][] CSV_Statistiken = FM.LastFilesToCSV(got_number_for_trends, true, LastFiles, PathMeasurementFiles);

            for (int x = 0; x < max_Col; x++)
            {
                for (int y = max_Rows - 1; y >= 0; y--)
                {
                    number_total++;
                    RectangleF r = new RectangleF(x * sx, y * sy, sx, sy);
                    if (MeasuredValues[y][x] < Properties.Settings.Default.Threshhold)
                    {
                        number_nio++;
                        if (CSV_Statistiken[y][x] == Properties.Settings.Default.TrendNumber)
                        {
                            myBrush.Color = Color.FromArgb(150, black);
                            g.FillRectangle(myBrush, r);
                        }
                        else
                        {
                            myBrush.Color = Color.FromArgb(150, red);
                            g.FillRectangle(myBrush, r);
                        }
                    }
                    else
                    {
                        myBrush.Color = Color.FromArgb(150, green);
                        g.FillRectangle(myBrush, r);
                    }
                }
            }
        }
        return bitmap;
    }
}
This returned bitmap is stored in fold.LastEvaluationImage as follows:
using (Image current = BitmapManipulator.EvaluateMesurement(CSV_Name1, max_Rows, max_Col, var.TopImage, var.BitmapToManipulate, pB_ColourScale_Evaluation.Image, var.BitmapToManipulate, var.Filepath, var.FoldID))
{
    var.LastEvaluationImage = current;
}
You're returning a disposed bitmap. It shouldn't be surprising you can't draw something that no longer exists :)
The using (bitmap) is the last thing you want in this case. The bitmap must survive longer than the scope of the using. And the using (current) in the caller has the same problem - you're again disposing the image way too early. You can only dispose it when it's clear that it isn't going to be used ever again - e.g. when you replace it with a new image.
To elaborate, using does nothing but call Dispose when you leave its scope. In the case of Bitmap (which is just a "thin" wrapper around a GDI bitmap), this releases the memory where the actual image data is stored. There isn't anything interesting left, so there's nothing to draw (and you'd basically be calling DrawBitmap(NULL) as far as GDI is concerned).
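A minimal sketch of what that means for the code above; the signature and the caller come from the question, only the lifetime handling changes:

public Image EvaluateMesurement(double[][] MeasuredValues, int max_Rows, int max_Col, Image pB_Evaluation_Top, Image pB_Evaluation_Bottom, Image pB_EvaluationColourScale, Image ManipulatedBitmap, string PathMeasurementFiles, string Foldname)
{
    // No using block around the bitmap: the caller keeps a reference to it,
    // so it must not be disposed when this method returns.
    Bitmap bitmap = new Bitmap(ManipulatedBitmap);
    using (Graphics g = Graphics.FromImage(bitmap))
    {
        // ... fill the rectangles exactly as in the original method ...
    }
    return bitmap;
}

// Caller: also no using. The LastEvaluationImage setter already disposes the previous
// image when a new one is assigned, which is the right moment to release it.
Image current = BitmapManipulator.EvaluateMesurement(CSV_Name1, max_Rows, max_Col, var.TopImage, var.BitmapToManipulate, pB_ColourScale_Evaluation.Image, var.BitmapToManipulate, var.Filepath, var.FoldID);
var.LastEvaluationImage = current;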
I've been playing around with NGif Animator for resizing animated GIFs, and it does resize, but parts of many of the animated GIFs I've tried get erased. I looked through the comments on that page and didn't see anyone else mention it.
To eliminate resizing as the cause I simply loop through the frames and save each one. Each frame is a System.Drawing.Image. Transparency is set to none (Color.Empty).
This is my test method currently:
GifDecoder gifDecoder = new GifDecoder();
MemoryStream memoryStream = new MemoryStream();
new BinaryWriter((Stream)memoryStream).Write(imageToResize); // byte array
memoryStream.Position = 0L;
gifDecoder.Read((Stream)memoryStream);
memoryStream.Dispose();

string filename = Guid.NewGuid().ToString().Replace("-", String.Empty) + ".gif";
string output = path + @"\" + filename;

AnimatedGifEncoder animatedGifEncoder = new AnimatedGifEncoder();
animatedGifEncoder.Start(output);
animatedGifEncoder.SetRepeat(gifDecoder.GetLoopCount());
animatedGifEncoder.SetQuality(10); // They say 20 is the max quality you will get; I've tried higher. It makes it a little better, but gives black areas instead of gray. 10 is their default.
animatedGifEncoder.SetTransparent(Color.Empty); // This is the default either way

int frameCount = gifDecoder.GetFrameCount();
int num = 0;
Image frame;
for (int index = frameCount; num < index; ++num)
{
    frame = gifDecoder.GetFrame(num);
    animatedGifEncoder.SetDelay(gifDecoder.GetDelay(num));
    string fname = @"C:\Development\images\frame_" + num.ToString() + ".gif";
    if (File.Exists(fname)) { File.Delete(fname); }
    frame.Save(fname);
    animatedGifEncoder.AddFrame(frame);
}
animatedGifEncoder.Finish();
Here's an example of what's happening:
The background is gone and it's gray.
It's supposed to look like:
Does anyone have experience with NGif and know what would cause this? The first frame is always fine; it's the ones after that which have the problem, so I'm guessing something isn't being reset from frame to frame (or re-read). I've been adding more things to their reset-frame method, but so far it hasn't helped. It now looks like this:
protected void ResetFrame()
{
    lastDispose = dispose;
    lastRect = new Rectangle(ix, iy, iw, ih);
    lastImage = image;
    lastBgColor = bgColor;
    delay = 0;
    transparency = false; // I don't want transparency
    lct = null;
    act = null;
    transIndex = -1;
}
There is actually a bug in their code: a byte array is not being reset. Check the comments on their page for a solution.
I have a UISlider. It is used to navigate quickly through a PDF. Whenever the threshold for the next page is reached, I display a UIView next to the slider's knob that contains a small preview of the target page.
The slider code is below (some parts stripped). If the next page is reached, a new preview is generated; otherwise, the existing one is moved along the slider.
I get various effects:
If previewing many many pages, the app crashes at
MonoTouch.CoreGraphics.CGContext.Dispose (bool) <0x00047>
Oct 11 17:21:13 unknown UIKitApplication:com.brainloop.brainloopbrowser[0x1a2d][2951] : at MonoTouch.CoreGraphics.CGContext.Finalize () <0x0002f>
or if I remove the calls to Dispose() in the last method: [NSAutoreleasePool release]: This pool has already been released, do not drain it (double release).
From looking at the code, does somebody have an idea what's wrong? Or is the whole approach of using a thread wrong?
this.oScrollSlider = new UISlider ();

this.oScrollSlider.TouchDragInside += delegate( object oSender, EventArgs oArgs )
{
    this.iCurrentPage = (int)Math.Round (oScrollSlider.Value);
    if (this.iCurrentPage != this.iLastScrollSliderPage)
    {
        this.iLastScrollSliderPage = this.iCurrentPage;
        this.RenderScrollPreviewImage(this.iCurrentPage);
    }
};

this.oScrollSlider.ValueChanged += delegate
{
    if (this.oScrollSliderPreview != null)
    {
        this.oScrollSliderPreview.RemoveFromSuperview ();
        this.oScrollSliderPreview.Dispose();
        this.oScrollSliderPreview = null;
    }
    // Go to the selected page.
};
The method that creates the preview spins off a new thread. If the user changes pages while the thread is still running, it is aborted and the next page is previewed:
private void RenderScrollPreviewImage (int iPage)
{
    // Create a new preview view if not there.
    if (this.oScrollSliderPreview == null)
    {
        SizeF oSize = new SizeF(150, 200);
        RectangleF oFrame = new RectangleF(new PointF (this.View.Bounds.Width - oSize.Width - 50, this.GetScrollSliderOffset (oSize)), oSize);
        this.oScrollSliderPreview = new UIView(oFrame);
        this.oScrollSliderPreview.BackgroundColor = UIColor.White;
        this.View.AddSubview(this.oScrollSliderPreview);

        UIActivityIndicatorView oIndicator = new UIActivityIndicatorView(UIActivityIndicatorViewStyle.Gray);
        oIndicator.Center = new PointF(this.oScrollSliderPreview.Bounds.Width/2, this.oScrollSliderPreview.Bounds.Height/2);
        this.oScrollSliderPreview.AddSubview(oIndicator);
        oIndicator.StartAnimating();
    }

    // Remove all subviews, except the activity indicator.
    if (this.oScrollSliderPreview.Subviews.Length > 0)
    {
        foreach (UIView oSubview in this.oScrollSliderPreview.Subviews)
        {
            if (!(oSubview is UIActivityIndicatorView))
            {
                oSubview.RemoveFromSuperview();
            }
        }
    }

    // Kill the currently running thread that renders a preview.
    if (this.oRenderScrollPreviewImagesThread != null)
    {
        try
        {
            this.oRenderScrollPreviewImagesThread.Abort();
        }
        catch (ThreadAbortException)
        {
            // Expected.
        }
    }

    // Start a new rendering thread.
    this.oRenderScrollPreviewImagesThread = new Thread (delegate()
    {
        using (var oPool = new NSAutoreleasePool())
        {
            try
            {
                // Create a quick preview.
                UIImageView oImgView = PdfViewerHelpers.GetLowResPagePreview (this.oPdfDoc.GetPage (iPage), new RectangleF (0, 0, 150, 200));
                this.InvokeOnMainThread(delegate
                {
                    if (this.oScrollSliderPreview != null)
                    {
                        oImgView.Center = new PointF(this.oScrollSliderPreview.Bounds.Width/2, this.oScrollSliderPreview.Bounds.Height/2);
                        // Add the PDF image to the preview view.
                        this.oScrollSliderPreview.AddSubview(oImgView);
                    }
                });
            }
            catch (Exception)
            {
            }
        }
    });

    // Start the thread.
    this.oRenderScrollPreviewImagesThread.Start ();
}
To render the PDF image, I use this:
internal static UIImageView GetLowResPagePreview (CGPDFPage oPdfPage, RectangleF oTargetRect)
{
    RectangleF oPdfPageRect = oPdfPage.GetBoxRect (CGPDFBox.Media);

    // If preview is requested for the PDF index view, render a smaller version.
    float fAspectScale = 1.0f;
    if (!oTargetRect.IsEmpty)
    {
        fAspectScale = GetAspectZoomFactor (oTargetRect.Size, oPdfPageRect.Size, false);
        // Resize the PDF page so that it fits the target rectangle.
        oPdfPageRect = new RectangleF (new PointF (0, 0), GetFittingBox (oTargetRect.Size, oPdfPageRect.Size));
    }

    // Create a low res image representation of the PDF page to display before the TiledPDFView
    // renders its content.
    int iWidth = Convert.ToInt32 ( oPdfPageRect.Size.Width );
    int iHeight = Convert.ToInt32 ( oPdfPageRect.Size.Height );

    CGColorSpace oColorSpace = CGColorSpace.CreateDeviceRGB();
    CGBitmapContext oContext = new CGBitmapContext(null, iWidth, iHeight, 8, iWidth * 4, oColorSpace, CGImageAlphaInfo.PremultipliedLast);

    // First fill the background with white.
    oContext.SetFillColor (1.0f, 1.0f, 1.0f, 1.0f);
    oContext.FillRect (oPdfPageRect);

    // Scale the context so that the PDF page is rendered
    // at the correct size for the zoom level.
    oContext.ScaleCTM (fAspectScale, fAspectScale);
    oContext.DrawPDFPage (oPdfPage);

    CGImage oImage = oContext.ToImage();
    UIImage oBackgroundImage = UIImage.FromImage( oImage);

    oContext.Dispose();
    oImage.Dispose ();
    oColorSpace.Dispose ();

    UIImageView oBackgroundImageView = new UIImageView (oBackgroundImage);
    oBackgroundImageView.Frame = new RectangleF (new PointF (0, 0), oPdfPageRect.Size);
    oBackgroundImageView.ContentMode = UIViewContentMode.ScaleToFill;
    oBackgroundImageView.UserInteractionEnabled = false;
    oBackgroundImageView.AutoresizingMask = UIViewAutoresizing.None;

    return oBackgroundImageView;
}
Avoid Thread.Abort().
Yeah, here are some links talking about it:
http://www.interact-sw.co.uk/iangblog/2004/11/12/cancellation
http://haacked.com/archive/2004/11/12/how-to-stop-a-thread.aspx
If you can use the .Net 4.0 features, go with them instead. Using a Task<T> is probably easier to work with in your case.
Also, I think it would be helpful to create some throttling and only start your new thread after 25-100 milliseconds of inactivity from the user.
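A rough sketch of that throttling idea, combined with Task and CancellationToken instead of Thread.Abort(); the member names here are invented and the preview wiring is reduced to the essentials:

private CancellationTokenSource previewCts;   // System.Threading

private void OnSliderPageChanged(int page)
{
    // Cancel whatever preview render is pending or still running.
    if (previewCts != null) previewCts.Cancel();
    previewCts = new CancellationTokenSource();
    var token = previewCts.Token;

    Task.Factory.StartNew(() =>
    {
        // Wait out a short quiet period; bail if the user moved the slider again.
        if (token.WaitHandle.WaitOne(100)) return;

        UIImageView preview = PdfViewerHelpers.GetLowResPagePreview(
            this.oPdfDoc.GetPage(page), new RectangleF(0, 0, 150, 200));

        if (token.IsCancellationRequested) return;
        this.InvokeOnMainThread(delegate
        {
            if (this.oScrollSliderPreview != null)
                this.oScrollSliderPreview.AddSubview(preview);
        });
    }, token);
}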
Get rid of Thread.Abort() as Jonathan suggests.
Don't start a new thread for each image and instead have one background thread with a work queue. Then simply put the most important page to the front of the queue so it renders as soon as possible. You can also optionally limit the size of the queue or remove unneeded items from it.
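A rough sketch (invented names) of that single worker with a queue; the newest request goes to the front so the page under the slider renders first:

private readonly LinkedList<int> renderQueue = new LinkedList<int>();
private readonly object queueLock = new object();

public void RequestPreview(int page)
{
    lock (queueLock)
    {
        renderQueue.Remove(page);        // drop a stale request for the same page
        renderQueue.AddFirst(page);      // most recent request is most important
        while (renderQueue.Count > 5)    // optionally cap the queue size
            renderQueue.RemoveLast();
        Monitor.Pulse(queueLock);
    }
}

private void RenderLoop()   // run this once on a single long-lived background thread
{
    while (true)
    {
        int page;
        lock (queueLock)
        {
            while (renderQueue.Count == 0) Monitor.Wait(queueLock);
            page = renderQueue.First.Value;
            renderQueue.RemoveFirst();
        }
        // ... render the preview for 'page' and hand the result to the UI thread ...
    }
}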
How can I fill the holes in a binary image in Emgu CV?
In AForge.NET it's easy: use the FillHoles class.
Though the question is a little bit old, I'd like to contribute an alternative solution to the problem.
You can obtain the same result as Chris's without the memory problem if you use the following:
private Image<Gray,byte> FillHoles(Image<Gray,byte> image)
{
    var resultImage = image.CopyBlank();
    Gray gray = new Gray(255);

    using (var mem = new MemStorage())
    {
        for (var contour = image.FindContours(
            CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
            RETR_TYPE.CV_RETR_CCOMP,
            mem); contour != null; contour = contour.HNext)
        {
            resultImage.Draw(contour, gray, -1);
        }
    }

    return resultImage;
}
The good thing about the method above is that you can selectively fill holes that meet your criteria. For example, you may want to fill only holes whose pixel count (count of black pixels inside the blob) is below 50, etc.
private Image<Gray,byte> FillHoles(Image<Gray,byte> image, int minArea, int maxArea)
{
    var resultImage = image.CopyBlank();
    Gray gray = new Gray(255);

    using (var mem = new MemStorage())
    {
        for (var contour = image.FindContours(
            CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
            RETR_TYPE.CV_RETR_CCOMP,
            mem); contour != null; contour = contour.HNext)
        {
            if ((contour.Area < maxArea) && (contour.Area > minArea))
                resultImage.Draw(contour, gray, -1);
        }
    }

    return resultImage;
}
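For example, a hypothetical call that fills only holes with an area between 0 and 50 pixels (the file names are placeholders):

var binary = new Image<Gray, byte>("binary.png");   // thresholded input image
var filled = FillHoles(binary, 0, 50);              // larger holes are left untouched
filled.Save("filled.png");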
Yes, there is a method, but it's a bit messy as it's based on the cvFloodFill operation. All this algorithm is designed to do is fill an area with a colour until it reaches an edge, similar to a region-growing algorithm. To use it effectively you need a little inventive coding, but I warn you: this code is only to get you started, and it may require refactoring to speed things up. As it stands, the loop goes through each of your pixels that is less than 255, applies cvFloodFill, checks what size the filled area is, and then fills it in if it is under a certain area.
It is important to note that a copy of the original image is supplied to the cvFloodFill operation, since a pointer is used. If the image itself is supplied, you will end up with a completely white image.
OpenFileDialog OpenFile = new OpenFileDialog();
if (OpenFile.ShowDialog() == DialogResult.OK)
{
    Image<Bgr, byte> image = new Image<Bgr, byte>(OpenFile.FileName);

    for (int i = 0; i < image.Width; i++)
    {
        for (int j = 0; j < image.Height; j++)
        {
            if (image.Data[j, i, 0] != 255)
            {
                Image<Bgr, byte> image_copy = image.Copy();
                Image<Gray, byte> mask = new Image<Gray, byte>(image.Width + 2, image.Height + 2);
                MCvConnectedComp comp = new MCvConnectedComp();
                Point point1 = new Point(i, j);

                CvInvoke.cvFloodFill(image_copy.Ptr, point1, new MCvScalar(255, 255, 255, 255),
                    new MCvScalar(0, 0, 0),
                    new MCvScalar(0, 0, 0), out comp,
                    Emgu.CV.CvEnum.CONNECTIVITY.EIGHT_CONNECTED,
                    Emgu.CV.CvEnum.FLOODFILL_FLAG.DEFAULT, mask.Ptr);

                if (comp.area < 10000)
                {
                    image = image_copy.Copy();
                }
            }
        }
    }
}
The "new MCvScalar(0, 0, 0), new MCvScalar(0, 0, 0)," are not really important in this case as you are only filling in results of a binary image. YOu could play around with other settings to see what results you can achieve. "if (comp.area < 10000)" is the key constant to change is you want to change what size hole the method will fill.
These are the results that you can expect:
Original
Results
The problem with this method is that it is extremely memory intensive: it managed to eat up 6GB of RAM on a 200x200 image, and when I tried 200x300 it ate all 8GB of my RAM and brought everything to a crashing halt. Unless a majority of your image is white and you only want to fill in tiny gaps, or you can minimise where you apply the method, I would avoid it. I would suggest writing your own class that examines each pixel that is not 255 and counts the connected non-255 pixels around it. You can record the position of each such pixel in a simple list and, if the count is below a threshold, set those positions to 255 in your image (by iterating through the list).
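Here is a rough sketch of that manual approach as I understand it (my interpretation, not Chris's exact code): flood-fill each dark region once with a BFS and, if the region is smaller than a threshold, paint all of its pixels white.

private static void FillSmallHoles(Image<Gray, byte> img, int maxHoleSize)
{
    bool[,] visited = new bool[img.Height, img.Width];
    for (int y = 0; y < img.Height; y++)
    {
        for (int x = 0; x < img.Width; x++)
        {
            if (visited[y, x] || img.Data[y, x, 0] == 255) continue;

            // Collect the connected dark region starting at (x, y).
            var region = new List<Point>();
            var queue = new Queue<Point>();
            queue.Enqueue(new Point(x, y));
            visited[y, x] = true;
            while (queue.Count > 0)
            {
                Point p = queue.Dequeue();
                region.Add(p);
                foreach (Point n in new[] { new Point(p.X + 1, p.Y), new Point(p.X - 1, p.Y),
                                            new Point(p.X, p.Y + 1), new Point(p.X, p.Y - 1) })
                {
                    if (n.X < 0 || n.Y < 0 || n.X >= img.Width || n.Y >= img.Height) continue;
                    if (visited[n.Y, n.X] || img.Data[n.Y, n.X, 0] == 255) continue;
                    visited[n.Y, n.X] = true;
                    queue.Enqueue(n);
                }
            }

            // Small region: treat it as a hole and fill it with white.
            if (region.Count < maxHoleSize)
                foreach (Point p in region)
                    img.Data[p.Y, p.X, 0] = 255;
        }
    }
}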
I would stick with the AForge FillHoles class if you do not wish to write your own, as it is designed for this purpose.
Cheers
Chris
You can use FillConvexPoly:
image.FillConvexPoly(externalContours.ToArray(), new Gray(255));