I am using Emgu CV and C#. I want to remove small connected objects from my image using ConnectedComponentsWithStats.
I have the following binary image as input:
I am able to draw a rectangle around components of a specified area. Now I want to remove those objects from the binary image.
Here is my code:
Image<Gray, byte> imgry = image.Convert<Gray, byte>();
var mask = imgry.InRange(new Gray(50), new Gray(255));
var label = new Mat();
var stats = new Mat();
var centroids = new Mat();
int labels = CvInvoke.ConnectedComponentsWithStats(mask, label, stats, centroids,
    LineType.EightConnected, DepthType.Cv32S);

// Each row of stats holds Left, Top, Width, Height and Area for one component
var img = stats.ToImage<Gray, int>();
Image<Gray, byte> imgout1 = new Image<Gray, byte>(image.Width, image.Height);
for (var i = 1; i < labels; i++)   // label 0 is the background
{
    var area = img[i, (int)ConnectecComponentsTypes.Area].Intensity;
    var width = img[i, (int)ConnectecComponentsTypes.Width].Intensity;
    var height = img[i, (int)ConnectecComponentsTypes.Height].Intensity;
    var top = img[i, (int)ConnectecComponentsTypes.Top].Intensity;
    var left = img[i, (int)ConnectecComponentsTypes.Left].Intensity;
    var roi = new Rectangle((int)left, (int)top, (int)width, (int)height);
    if (area > 1000)
    {
        CvInvoke.Rectangle(imgout1, roi, new MCvScalar(255), 1);
    }
}
Rectangle drawn for the specified area:
How can I remove objects below a specified size?
My output image should look like this:
I have achieved this one way using contours; it works fine for a small image, but with a large image (10240 x 10240) and a larger number of particles my application enters break mode.
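For reference, one way to remove the small components without contours is to build a keep/discard table from the stats rows and then copy only the pixels whose label survives. This is a sketch reusing the variable names from the code above (mask, label, stats, labels); the area threshold of 1000 is the same illustrative value, and the code is untested against a specific Emgu version:

var labelImg = label.ToImage<Gray, int>();   // per-pixel component labels
var statsImg = stats.ToImage<Gray, int>();

// Decide once per label whether it is big enough to keep
var keep = new bool[labels];
for (var i = 1; i < labels; i++)
    keep[i] = statsImg[i, (int)ConnectecComponentsTypes.Area].Intensity > 1000;

// Rebuild the image with only the kept components
var cleaned = new Image<Gray, byte>(mask.Size);
for (var y = 0; y < labelImg.Height; y++)
    for (var x = 0; x < labelImg.Width; x++)
    {
        var l = (int)labelImg[y, x].Intensity;
        if (l > 0 && keep[l])
            cleaned[y, x] = new Gray(255);
    }

For a 10240 x 10240 image the per-pixel Image indexer is slow; accessing the raw Data array (or working on the Mat directly) should be considerably faster, since this avoids per-pixel bounds checks and object creation.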
I am trying to find all differences between two images that have different offsets. I am able to align the images using the ORB detector, but when I try to find differences with CvInvoke.AbsDiff, all objects are reported as different even though only a few changed in the second image.
Image alignment logic:
string file2 = "E://Image1.png";
string file1 = "E://Image2.png";

// Load both images and convert them to grayscale
var image = CvInvoke.Imread(file1, Emgu.CV.CvEnum.ImreadModes.Color);
CvInvoke.CvtColor(image, image, Emgu.CV.CvEnum.ColorConversion.Bgr2Gray);
var template = CvInvoke.Imread(file2, Emgu.CV.CvEnum.ImreadModes.Color);
CvInvoke.CvtColor(template, template, Emgu.CV.CvEnum.ColorConversion.Bgr2Gray);

// Detect ORB keypoints and compute descriptors in both images
Emgu.CV.Features2D.ORBDetector dt = new Emgu.CV.Features2D.ORBDetector(50000);
Emgu.CV.Util.VectorOfKeyPoint kpsA = new Emgu.CV.Util.VectorOfKeyPoint();
Emgu.CV.Util.VectorOfKeyPoint kpsB = new Emgu.CV.Util.VectorOfKeyPoint();
var descsA = new Mat();
var descsB = new Mat();
dt.DetectAndCompute(image, null, kpsA, descsA, false);
dt.DetectAndCompute(template, null, kpsB, descsB, false);

// Match descriptors with a brute-force Hamming matcher
Emgu.CV.Features2D.DescriptorMatcher matcher = new Emgu.CV.Features2D.BFMatcher(Emgu.CV.Features2D.DistanceType.Hamming);
matcher.Add(descsA);
VectorOfVectorOfDMatch matches = new VectorOfVectorOfDMatch();
matcher.KnnMatch(descsB, matches, 2, null);

// The uniqueness mask must be sized after matching; before KnnMatch,
// matches.Size is 0 and the mask would be empty
var mask = new Mat(matches.Size, 1, Emgu.CV.CvEnum.DepthType.Cv8U, 1);
mask.SetTo(new MCvScalar(255));
Emgu.CV.Features2D.Features2DToolbox.VoteForUniqueness(matches, 0.80, mask);

// Collect matched point pairs (TrainIdx indexes descsA, QueryIdx indexes descsB)
Image<Bgr, byte> aligned = new Image<Bgr, byte>(image.Size);
PointF[] ptsA = new PointF[matches.Size];
PointF[] ptsB = new PointF[matches.Size];
MKeyPoint[] kpsAModel = kpsA.ToArray();
MKeyPoint[] kpsBModel = kpsB.ToArray();
for (int i = 0; i < matches.Size; i++)
{
    ptsA[i] = kpsAModel[matches[i][0].TrainIdx].Point;
    ptsB[i] = kpsBModel[matches[i][0].QueryIdx].Point;
}

// Estimate the homography and warp the image onto the template
Mat homography = CvInvoke.FindHomography(ptsA, ptsB, Emgu.CV.CvEnum.HomographyMethod.Ransac);
CvInvoke.WarpPerspective(image, aligned, homography, template.Size);
I have attached the images used. Are there any suggestions to resolve this?
Thanks.
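One common cause of "everything is different" is that even a sub-pixel residual misalignment makes AbsDiff light up along every edge. A usual follow-up step is to threshold the difference image and apply a morphological opening so that thin edge residue disappears while genuinely changed regions survive. This is a sketch, not the asker's code: it assumes aligned and template are single-channel images of the same size, and the threshold and kernel values are illustrative and need tuning:

// Suppress thin edge artifacts left by imperfect alignment
var diff = new Mat();
CvInvoke.AbsDiff(aligned, template, diff);

// Keep only strong differences
var binary = new Mat();
CvInvoke.Threshold(diff, binary, 40, 255, Emgu.CV.CvEnum.ThresholdType.Binary);

// Opening removes 1-2 px wide edge residue but keeps larger changed blobs
var kernel = CvInvoke.GetStructuringElement(Emgu.CV.CvEnum.ElementShape.Rectangle,
    new System.Drawing.Size(5, 5), new System.Drawing.Point(-1, -1));
CvInvoke.MorphologyEx(binary, binary, Emgu.CV.CvEnum.MorphOp.Open, kernel,
    new System.Drawing.Point(-1, -1), 1, Emgu.CV.CvEnum.BorderType.Default, new MCvScalar());

Anything still white in binary after this pass is a candidate real difference; small isolated specks can additionally be filtered by area with ConnectedComponentsWithStats or FindContours.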
I'm trying to detect the contour of an ellipse-like water droplet with Emgu CV. I wrote this code for contour detection:
public List<int> GetDiameters()
{
    string inputFile = @"path.jpg";
    Image<Bgr, byte> imageInput = new Image<Bgr, byte>(inputFile);
    Image<Gray, byte> grayImage = imageInput.Convert<Gray, byte>();

    Image<Gray, byte> bluredImage = grayImage;
    CvInvoke.MedianBlur(grayImage, bluredImage, 9);

    Image<Gray, byte> edgedImage = bluredImage;
    CvInvoke.Canny(bluredImage, edgedImage, 50, 5);

    Image<Gray, byte> closedImage = edgedImage;
    Mat kernel = CvInvoke.GetStructuringElement(Emgu.CV.CvEnum.ElementShape.Ellipse,
        new System.Drawing.Size { Height = 100, Width = 250 }, new System.Drawing.Point(-1, -1));
    CvInvoke.MorphologyEx(edgedImage, closedImage, Emgu.CV.CvEnum.MorphOp.Close, kernel,
        new System.Drawing.Point(-1, -1), 0, Emgu.CV.CvEnum.BorderType.Replicate, new MCvScalar());

    Image<Gray, byte> contoursImage = closedImage;
    Image<Bgr, byte> imageOut = imageInput;
    VectorOfVectorOfPoint rescontours1 = new VectorOfVectorOfPoint();
    using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
    {
        CvInvoke.FindContours(contoursImage, contours, null, Emgu.CV.CvEnum.RetrType.List,
            Emgu.CV.CvEnum.ChainApproxMethod.LinkRuns);
        MCvScalar color = new MCvScalar(0, 0, 255);
        int count = contours.Size;
        for (int i = 0; i < count; i++)
        {
            using (VectorOfPoint contour = contours[i])
            using (VectorOfPoint approxContour = new VectorOfPoint())
            {
                CvInvoke.ApproxPolyDP(contour, approxContour,
                    0.01 * CvInvoke.ArcLength(contour, true), true);
                var area = CvInvoke.ContourArea(contour);
                if (area > 0 && approxContour.Size > 10)
                {
                    rescontours1.Push(approxContour);
                }
                CvInvoke.DrawContours(imageOut, rescontours1, -1, color, 2);
            }
        }
    }
    return new List<int>(); // diameters are not computed yet
}
The result so far:
I think there is a problem with the approximation. How do I get rid of the internal lines and close the external contour?
I might need some more information to pinpoint your issue exactly, but it may be something to do with your median blur. I would check whether you are blurring enough for the Canny edge detection to work well on the result. Another method you could use is dilation: try dilating your Canny edge output and see if you get better results.
EDIT
Here is the code:
public List<int> GetDiameters()
{
    // List to hold output diameters
    List<int> diametors = new List<int>();

    // File path to where the image is located
    string inputFile = @"C:\Users\jones\Desktop\Image Folder\water.JPG";

    // Read in the image and store it as a Mat object
    Mat img = CvInvoke.Imread(inputFile, Emgu.CV.CvEnum.ImreadModes.AnyColor);

    // Blur the image
    Mat gaussianBlur = new Mat();
    CvInvoke.GaussianBlur(img, gaussianBlur, new System.Drawing.Size(21, 21), 20, 20, Emgu.CV.CvEnum.BorderType.Default);

    // Canny the blurred image
    Mat canny = new Mat();
    CvInvoke.Canny(gaussianBlur, canny, 40, 40);

    // Dilate the canny image
    Mat dilate = new Mat();
    CvInvoke.Dilate(canny, dilate, null, new System.Drawing.Point(-1, -1), 6, Emgu.CV.CvEnum.BorderType.Default, new MCvScalar(0, 0, 0));

    // Find the contours and draw them on the image
    VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
    CvInvoke.FindContours(dilate, contours, null, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
    CvInvoke.DrawContours(img, contours, -1, new MCvScalar(255, 0, 0), 5, Emgu.CV.CvEnum.LineType.FourConnected);

    // Find the biggest contour (by number of points)
    int biggest = 0;
    int index = 0;
    for (int i = 0; i < contours.Size; i++)
    {
        if (contours[i].Size > biggest)   // compare the contour itself, not the outer vector
        {
            biggest = contours[i].Size;
            index = i;
        }
    }

    // Once all contours have been looped over, add the biggest contour's index to the list
    diametors.Add(index);

    // Return the list
    return diametors;
}
The first thing you do is blur the image.
Then you Canny the image.
Then you dilate the image, to make the final output contours more uniform.
Then you just find the contours.
I know the final contours are a little bigger than the water droplet, but this is the best that I could come up with. You can probably fiddle with some of the settings in the code above to make the result a little cleaner.
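The method name suggests an actual diameter is wanted, but the code above only stores the index of the biggest contour. One way to turn that contour into an approximate diameter is the minimum enclosing circle; the following sketch assumes the contours vector and diametors list from the code above and is an untested illustration, not part of the original answer:

// Find the biggest contour by area, then take the enclosing circle's
// diameter as an approximation of the droplet diameter (in pixels)
double biggestArea = 0;
int biggestIdx = -1;
for (int i = 0; i < contours.Size; i++)
{
    double area = CvInvoke.ContourArea(contours[i]);
    if (area > biggestArea)
    {
        biggestArea = area;
        biggestIdx = i;
    }
}
if (biggestIdx >= 0)
{
    CircleF circle = CvInvoke.MinEnclosingCircle(contours[biggestIdx]);
    diametors.Add((int)(circle.Radius * 2));
}

Comparing by ContourArea rather than point count is also more robust, since a long thin noise contour can have more points than the droplet itself.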
I have many color ranges I need to filter, and I then need to combine the filtered images into a single image containing all those filtered colors, like this:
Image<Gray, Byte> grayscale2 = frame2.Convert<Gray, Byte>();
for (int i = 1; i < colors.Length - 1; i++)
{
    var color1 = colors[i].Split('-');
    var color2 = colors[i + 1].Split('-');
    var img = frame2.InRange(
        new Bgr(double.Parse(color1[0]), double.Parse(color1[1]), double.Parse(color1[2])),
        new Bgr(double.Parse(color2[0]), double.Parse(color2[1]), double.Parse(color2[2])))
        .Convert<Gray, Byte>();
}
"colors" is an array of RGB saved colors as string.
I am looking for the fastest way to combine (merge) all img in grayscale2.
Thank you.
I don't know exactly what you want. I made a simple program in OpenCV in Python. As you have grayscale images you can add them, but you need to remember this: if a pixel has value 150 in image1 and value 150 in image2, the sum (300) is clipped to 255. So you have to add them with weights.
import cv2 as cv
import numpy as np

img1 = cv.imread('image1.jpg')
img2 = cv.imread('image2.jpg')

# Convert to HSV and keep only the blue range in each image
hsv1 = cv.cvtColor(img1, cv.COLOR_BGR2HSV)
hsv2 = cv.cvtColor(img2, cv.COLOR_BGR2HSV)
lower_blue = np.array([110, 50, 50])
upper_blue = np.array([130, 255, 255])
mask1 = cv.inRange(hsv1, lower_blue, upper_blue)
mask2 = cv.inRange(hsv2, lower_blue, upper_blue)

# Blend the two masks with equal weights so overlapping pixels do not saturate
alpha = 0.5
beta = 0.5
output = cv.addWeighted(mask1, alpha, mask2, beta, 0.0)

cv.imshow('av1', img1)
cv.imshow('av2', img2)
cv.imshow('av3', mask1)
cv.imshow('av4', mask2)
cv.imshow('av5', output)
cv.waitKey(0)
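If the per-range outputs are binary masks (as InRange produces), a plain bitwise OR merges them without the intensity halving that weighted addition causes: overlapping white pixels stay at 255. Here is a sketch in Emgu CV terms using the variable names from the question (frame2, colors); it is an untested illustration:

// Accumulate all InRange masks into one image with bitwise OR
Image<Gray, byte> combined = new Image<Gray, byte>(frame2.Size);
for (int i = 1; i < colors.Length - 1; i++)
{
    var c1 = colors[i].Split('-');
    var c2 = colors[i + 1].Split('-');
    var img = frame2.InRange(
        new Bgr(double.Parse(c1[0]), double.Parse(c1[1]), double.Parse(c1[2])),
        new Bgr(double.Parse(c2[0]), double.Parse(c2[1]), double.Parse(c2[2])));
    CvInvoke.BitwiseOr(combined, img, combined);
}

This keeps everything inside OpenCV's native routines, so it should also be fast compared to per-pixel work in managed code.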
I did something like this by converting the images to bitmaps first and then combining them; it is much faster:
public static Bitmap CombineBitmap(string[] files)
{
    // Use the first image to size the output bitmap
    Bitmap img = new Bitmap(files[0]);
    Bitmap img3 = new Bitmap(img.Width, img.Height);
    Graphics g = Graphics.FromImage(img3);
    g.Clear(SystemColors.AppWorkspace);

    // Draw each file onto the output, treating white as transparent
    foreach (string file in files)
    {
        img = new Bitmap(file);
        img.MakeTransparent(Color.White);
        g.DrawImage(img, new Point(0, 0));
    }

    using (var b = new Bitmap(img3.Width, img3.Height))
    {
        b.SetResolution(img3.HorizontalResolution, img3.VerticalResolution);
        using (var g2 = Graphics.FromImage(b))
        {
            g2.Clear(Color.White);
            g2.DrawImageUnscaled(img3, 0, 0);
        }
        // Now save b as a JPEG like you normally would
        return img3;
    }
}
INPUT IMAGE
Hi, I am trying to learn EmguCV 3.3 and I have a question about blob counting. As you can see in the INPUT IMAGE, I have black, uneven blobs.
I am trying to do something like this:
OUTPUT IMAGE
I need to draw rectangles around the blobs and count them.
I tried some approaches but none of them worked.
I need Help();
You can use FindContours() or SimpleBlobDetector to achieve that; here is example code using the first one:
Image<Gray, Byte> grayImage = new Image<Gray, Byte>("mRGrc.jpg");
Image<Gray, Byte> canny = new Image<Gray, byte>(grayImage.Size);
int counter = 0;

// First pass: draw every contour and count them
using (MemStorage storage = new MemStorage())
    for (Contour<Point> contours = grayImage.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, storage); contours != null; contours = contours.HNext)
    {
        contours.ApproxPoly(contours.Perimeter * 0.05, storage);
        CvInvoke.cvDrawContours(canny, contours, new MCvScalar(255), new MCvScalar(255), -1, 1, Emgu.CV.CvEnum.LINE_TYPE.EIGHT_CONNECTED, new Point(0, 0));
        counter++;
    }

// Second pass: draw a bounding rectangle around each contour
using (MemStorage store = new MemStorage())
    for (Contour<Point> contours1 = grayImage.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, store); contours1 != null; contours1 = contours1.HNext)
    {
        Rectangle r = CvInvoke.cvBoundingRect(contours1, 1);
        canny.Draw(r, new Gray(255), 1);
    }

Console.WriteLine("Number of blobs: " + counter);
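Note that the code above uses the legacy Emgu CV 2.x contour API (Contour<Point>, MemStorage, cvDrawContours), which was removed in 3.x, so it will not compile against the EmguCV 3.3 mentioned in the question. A sketch of the same idea with the 3.x API follows; the file name and threshold are illustrative, and since the blobs are black on a light background the image is inverted before contour finding:

// Count blobs and draw bounding rectangles with the Emgu CV 3.x API
Image<Gray, byte> gray = new Image<Gray, byte>("mRGrc.jpg");
Image<Gray, byte> binary = gray.ThresholdBinaryInv(new Gray(128), new Gray(255));

var contours = new VectorOfVectorOfPoint();
CvInvoke.FindContours(binary, contours, null,
    Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);

for (int i = 0; i < contours.Size; i++)
{
    Rectangle r = CvInvoke.BoundingRectangle(contours[i]);
    CvInvoke.Rectangle(gray, r, new MCvScalar(255), 1);
}
Console.WriteLine("Number of blobs: " + contours.Size);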
Image<Gray, Byte>[] trainingImages = new Image<Gray, Byte>[5];
trainingImages[0] = new Image<Gray, byte>("MyPic.jpg");
String[] labels = new String[] { "mine" };
MCvTermCriteria termCrit = new MCvTermCriteria(1, 0.001);
EigenObjectRecognizer recognizer = new EigenObjectRecognizer(
    trainingImages,
    labels,
    1000,
    ref termCrit);
Image<Gray, Byte> testImage = new Image<Gray, Byte>("sample_photo.jpg");
var result = recognizer.Recognize(testImage);
result.Label always returns the string "mine" (the label of the training image) for every face it detects.
result.Label should be returned only when a detected face matches the training image; instead, it returns the same label for every face.
What is the problem with my code?