How to detect the number of objects in an image? - C#

I have a Windows Forms application and I want to count the number of objects in a medical image.
I used an algorithm which extracts the contours of every object in the image:
private void findContoursToolStripMenuItem_Click(object sender, EventArgs e)
{
    Image<Gray, byte> imgOutput = imgInput.Convert<Gray, byte>().ThresholdBinary(new Gray(100), new Gray(255));
    Emgu.CV.Util.VectorOfVectorOfPoint contours = new Emgu.CV.Util.VectorOfVectorOfPoint();
    Mat hier = new Mat();
    Image<Gray, byte> imgout = new Image<Gray, byte>(imgInput.Width, imgInput.Height, new Gray(100));

    CvInvoke.FindContours(imgOutput, contours, hier, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
    CvInvoke.DrawContours(imgout, contours, -1, new MCvScalar(123, 0, 0));

    pictureBox2.Image = imgout.Bitmap;
}
But I can't work out the number of cells in the image. Is there an algorithm I should use?
I searched the Emgu CV documentation but couldn't find any function that does something like what I want.
Any advice or answer would be appreciated.
If you consider this too broad: I don't want a finished implementation, just some ideas or a suggestion of which algorithm to use.

It's probably a bit basic and brute force, but how about selecting a random point in the image that is close to the green colour, then effectively searching for 'matching' colours (with a tolerance around that colour). As you visit each pixel, colour it black so you don't look at it again, and count how many pixels you have coloured in. Each time you select a starting pixel, make sure it's not already black. Once you can't find any more matching points, if the number of blackened pixels is greater than a tolerance (so you only find 'big' polygons), count it towards the number of cells. A sketch of this idea follows below.
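For what it's worth, that visit-and-mark scheme is essentially connected-component labeling, which EmguCV wraps directly. A minimal sketch of the counting (my own, assuming EmguCV 3.x or later; binary stands for a thresholded image like imgOutput in the question):

// Sketch: count blobs via connected-component labeling (assumes EmguCV 3.x+).
Mat labels = new Mat();
Mat stats = new Mat();
Mat centroids = new Mat();
int nLabels = CvInvoke.ConnectedComponentsWithStats(binary, labels, stats, centroids);

// stats is an nLabels x 5 Int32 matrix; column 4 holds each component's area.
Matrix<int> statsM = new Matrix<int>(stats.Rows, stats.Cols);
stats.CopyTo(statsM);

int minArea = 50; // the "big polygon" tolerance -- tune for your images
int cellCount = 0;
for (int i = 1; i < nLabels; i++) // label 0 is the background
{
    if (statsM[i, 4] >= minArea)
        cellCount++;
}

Alternatively, since the question's code already calls FindContours, filtering the contours by CvInvoke.ContourArea and counting the survivors gives the same number.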

Related

Shift Emgu.CV.Image to the right using WarpAffine()

My goal is to shift an image by some number of pixels to the right. I guess this can be achieved using WarpAffine; at least that is what my research tells me. I tried quite a variety of different approaches, like:
CvInvoke.WarpAffine(realImage, transformedImage, transformationMatrix, realImage.Size);
//or, where m is the Mat used to create realImage:
transImage.WarpAffine(m, Emgu.CV.CvEnum.Inter.Area, Emgu.CV.CvEnum.Warp.Default, Emgu.CV.CvEnum.BorderType.Default, new Gray());
I get the exception:
Exception thrown: 'Emgu.CV.Util.CvException' in Emgu.CV.Platform.NetStandard.dll
[ WARN:0] global E:\bb\cv_x86\build\opencv\modules\videoio\src\cap_msmf.cpp (436) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback
I guess I am using it the wrong way, but there is no suitable example online for me to learn from.
Does anyone have a clean way to explain it to me?
Thank you in advance!
If you are shifting the pixels x amount to the right, I assume there would be black, empty pixels on the left side? If so, you could create an ROI that cuts off some pixels on the right (since you are shifting all pixels to the right) and copy that image onto another image.
//The image that you want to shift pixels with
Image<Bgr, byte> inputImage = new Image<Bgr, byte>(1000, 1000);
//The output image
Image<Bgr, byte> image = new Image<Bgr, byte>(990, 1000);
//Create the roi, with 10 pixels cut off from the right side because of the shift
inputImage.ROI = new Rectangle(0, 0, inputImage.Width - 10, inputImage.Height);
inputImage.CopyTo(image);
CvInvoke.ImShow("The Output", image);
CvInvoke.WaitKey(0);
EDIT
Now let's say you want to keep that black stripe on the left side of the image as well. This is very similar to the code above, with only a few modifications.
//The image that you want to shift pixels with
Image<Bgr, byte> inputImage = new Image<Bgr, byte>(1000, 1000);
//The output image
Image<Bgr, byte> image = new Image<Bgr, byte>(1000, 1000);
//Create the roi, with 10 pixels cut off from the right side because of the shift
inputImage.ROI = new Rectangle(0, 0, inputImage.Width - 10, inputImage.Height);
//The image we want to copy from (the one we created the ROI on)
//now has dimensions 990 x 1000, while the output image is
//1000 x 1000. In order to paste an image onto another image,
//their (ROI) dimensions need to match. So we create an ROI on
//the output image with the same dimensions as the input ROI.
image.ROI = new Rectangle(10, 0, image.Width - 10, image.Height);
//Now we can paste the image onto the output because the dimensions match
inputImage.CopyTo(image);
//In order to make our output look normal again, we must reset the ROI of the output image
image.ROI = Rectangle.Empty;
CvInvoke.ImShow("The Output", image);
CvInvoke.WaitKey(0);
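For completeness, since the question asked specifically about WarpAffine: the shift can also be expressed as a 2x3 affine translation matrix. A minimal sketch (my own, assuming a recent Emgu.CV 4.x API):

// Sketch: shift an image 10 px to the right with WarpAffine (assumes EmguCV 4.x).
Image<Bgr, byte> src = new Image<Bgr, byte>(1000, 1000);
Image<Bgr, byte> dst = new Image<Bgr, byte>(src.Size);

// 2x3 affine matrix for a pure translation: x' = x + 10, y' = y.
Matrix<double> m = new Matrix<double>(2, 3);
m[0, 0] = 1; m[0, 1] = 0; m[0, 2] = 10;
m[1, 0] = 0; m[1, 1] = 1; m[1, 2] = 0;

CvInvoke.WarpAffine(src, dst, m, src.Size,
    Inter.Linear, Warp.Default, BorderType.Constant, new MCvScalar(0, 0, 0));

The destination keeps the full image size; the 10 columns that fall outside the source are filled with the constant border colour (black), which is the same stripe discussed above.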

FloodFill function producing weird results

I need to use the FloodFill function of Emgu CV in C#.
I managed to use it like this:
Image<Gray, Byte> img = new Image<Gray, byte>(this.B);
img.Save("orig.png");
int height = img.Rows;
int width = img.Cols;
Point s = new Point(227, 295);
int tol = 8;
Mat outputMask = new Mat(height + 2, width + 2, Emgu.CV.CvEnum.DepthType.Cv8U, 1);
Rectangle dummyRect = new Rectangle();
CvInvoke.FloodFill(img,
    outputMask,
    s,
    new MCvScalar(255, 0, 0),
    out dummyRect,
    new MCvScalar(tol), new MCvScalar(tol),
    Emgu.CV.CvEnum.Connectivity.EightConnected);
    //Emgu.CV.CvEnum.FloodFillType.MaskOnly);
img.Save("imgModif.png");
Console.Write("End");
However, the output I get in img (or in outputMask) does not make any sense at all: just some white rectangular areas. I am expecting the result of a Photoshop-like magic wand. Any idea what is wrong? Moreover, even though I specify (255, 0, 0) as the fill color, the result in img is white. (B is a preloaded Bitmap image.)
I am trying to do the same as this:
http://www.andrew-seaford.co.uk/flood-fill-opencv/
except that I work with a grayscale image initially.
This is what I get if I set tol = 255 (which should fill the whole image in white, if this function really behaved like a magic wand):
(image: Lena, tol = 255)
This 'rectangle' does not make sense; even worse, you can see some white pixels escaping the rectangular region (on Lena's hat), so it does not even seem to be a rectangular constraint. Also, a few pixels inside the 'pseudo-rectangle' are not white. I would be curious if someone could test this function on their system with a Lena image.
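One thing worth checking (an assumption on my part, not a confirmed diagnosis): a Mat allocated with new Mat(...) is not guaranteed to be zero-initialized, and FloodFill will not cross nonzero mask pixels, so leftover garbage in the mask could block the fill in exactly this kind of arbitrary blocky pattern. Clearing the mask before the call rules that out:

// Sketch: explicitly zero the flood-fill mask before use.
// FloodFill only fills pixels whose corresponding mask entry is 0,
// and a freshly constructed Mat may contain arbitrary memory.
Mat outputMask = new Mat(height + 2, width + 2, Emgu.CV.CvEnum.DepthType.Cv8U, 1);
outputMask.SetTo(new MCvScalar(0));

As for the fill colour: img is a single-channel (Gray) image, so only the first component of new MCvScalar(255, 0, 0) is used, which is why the fill comes out white.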

EmguCV: turn the black background transparent

I'm very new to Emgu CV, so I need a little help.
The code below is mainly assembled from various places found via Google. It takes a jpg file (which has a green background) and allows the h1 and h2 settings to be changed from a separate form, so as to create (reveal) a mask.
Now what I want to be able to do with this mask is to turn it transparent.
At the moment it just displays a black background around a person (for example), and then saves to file.
I need to know how to turn the black background transparent. Is this the correct way to approach the problem?
Thanks in advance.
What I have so far, in C#:
imgInput = new Image<Bgr, byte>(FileName);
Image<Hsv, Byte> hsvimg = imgInput.Convert<Hsv, Byte>();

//extract the hue and value channels
Image<Gray, Byte>[] channels = hsvimg.Split(); // split into components
Image<Gray, Byte> imghue = channels[0]; // hsv, so channels[0] is hue.
Image<Gray, Byte> imgval = channels[2]; // hsv, so channels[2] is value.

//filter out all but "the color you want"...seems to be 0 to 128 (64, 72) ?
Image<Gray, Byte> huefilter = imghue.InRange(new Gray(h1), new Gray(h2));
Image<Gray, Byte> mask = huefilter; // the mask used below (presumably the hue filter)

// TURN IT TRANSPARENT somewhere around here?
pictureBox2.Image = imgInput.Copy(mask).Bitmap;
imgInput.Copy(mask).Save("changedImage.png");
I am not sure I really understand what you are trying to do, but a mask is a binary object: usually black for what you do not want and white for what you do. As far as I know there is no transparent mask; to me that makes no sense. Masks are used to extract parts of an image by masking out the rest.
Maybe you could elaborate on what it is you want to do?
Doug
I think I may have found the solution I was looking for: some code on Stack Overflow which I've tweaked a little:
public Image<Bgra, Byte> MakeTransparent(Image<Bgr, Byte> image, double r1, double r2)
{
    Mat imageMat = image.Mat;
    Mat finalMat = new Mat(imageMat.Rows, imageMat.Cols, DepthType.Cv8U, 4);
    Mat tmp = new Mat(imageMat.Rows, imageMat.Cols, DepthType.Cv8U, 1);
    Mat alpha = new Mat(imageMat.Rows, imageMat.Cols, DepthType.Cv8U, 1);

    // Build the alpha channel: threshold the grayscale version so the
    // dark background becomes 0 (transparent) and the subject opaque.
    CvInvoke.CvtColor(imageMat, tmp, ColorConversion.Bgr2Gray);
    CvInvoke.Threshold(tmp, alpha, (int)r1, (int)r2, ThresholdType.Binary);

    // Split the B, G and R channels and re-merge them together with alpha.
    VectorOfMat bgr = new VectorOfMat(3);
    CvInvoke.Split(imageMat, bgr);
    Mat[] bgra = { bgr[0], bgr[1], bgr[2], alpha };
    VectorOfMat vector = new VectorOfMat(bgra);
    CvInvoke.Merge(vector, finalMat);

    return finalMat.ToImage<Bgra, Byte>();
}
I'm now looking at adding a Gaussian blur to the mask to create a kind of blend, where the two images are layered, rather than a sharp cut-out (sketched below).
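A minimal sketch of that blend idea (my own, not the poster's final code): blur only the alpha channel before the Merge call in MakeTransparent above, so the cut-out edge fades instead of stepping:

// Sketch: soften the binary alpha mask so the edge fades gradually.
// Insert before the CvInvoke.Merge(...) call above. The kernel size is
// a tuning knob; larger kernels give a wider, softer transition.
CvInvoke.GaussianBlur(alpha, alpha, new Size(15, 15), 0);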

Improve a fitted Ellipse using EmguCV (2.4) in C#

I'm using EmguCV 2.4 in C# for edge detection and ellipse fitting of ellipsoidal objects in a picture (e.g. a laser spot).
EmguCV implements functions for fitting an ellipse to a point cloud using the least-squares method, but the ellipses do not fit the ellipsoids very well, depending on their angle.
Here's the basic code I'm using:
CHAIN_APPROX_METHOD approxMethod = CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE;
RETR_TYPE RetrType = RETR_TYPE.CV_RETR_LIST;
Bitmap Bmp = new Bitmap(@"C:\Users\pernizki\Downloads\binary2.bmp"); // read Bitmap
Image<Bgr, byte> ImgIn = new Image<Bgr, byte>(Bmp);                  // convert to Image<>
Image<Gray, byte> ImgBin = ImgIn.Convert<Gray, byte>();              // convert Bgr to Gray
ImgBin = ImgBin.ThresholdBinary(new Gray(140), new Gray(120)).PyrDown().PyrUp(); // convert to binary and reduce noise
ImgBin = ImgBin.Canny(100, 120); // detect the edges of the binary image
Contour<Point> Contour = ImgBin.FindContours(approxMethod, RetrType); // get the contour from the binary image

// Convert the contour to a PointF array
int cntr = 0;
PointF[] ContourPts = new PointF[Contour.Total];
foreach (Point p in Contour)
{
    ContourPts[cntr++] = new PointF(p.X, p.Y);
}

Ellipse fittedEllipse = PointCollection.EllipseLeastSquareFitting(ContourPts);

ImgIn.Draw(Contour, new Bgr(Color.Green), 2);
ImgIn.Draw(fittedEllipse, new Bgr(Color.Tomato), 2);
CvInvoke.cvShowImage("Fitted Ellipse", ImgIn.Ptr);
When I have a vertical or horizontal ellipse as the input image, the fitted ellipse is always rotated 90° relative to the input shape. If the angle is somewhere between 0° and 90°, the fitted ellipse is still rotated, but by a different amount.
I understand that this problem is fundamental to the least-squares method. But is there a more robust algorithm for fitting an ellipse that encloses all of the points?
The picture in the EmguCV tutorial for fitting an ellipse seems to be exactly what I'm looking for. Unfortunately, that's not how the function works.
Here's an example image that I've used (sorry for having only one, as I don't have enough reputation for more):
(image: rotated ellipse)
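One idea worth trying (a sketch under the assumption of a newer EmguCV 3.x/4.x API rather than 2.4): the minimum-area rotated rectangle encloses every contour point by construction, and the ellipse inscribed in it often tracks the blob's orientation better than the least-squares fit:

// Sketch: enclose all contour points in a rotated rectangle and draw the
// inscribed ellipse (assumes EmguCV 3.x/4.x; ContourPts as in the question).
RotatedRect box = CvInvoke.MinAreaRect(ContourPts);
CvInvoke.Ellipse(ImgIn, box, new MCvScalar(0, 0, 255), 2);

Newer OpenCV versions also expose alternative fitters (fitEllipseAMS, fitEllipseDirect) that are less sensitive to the point distribution than the classic least-squares fit.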

Point Tracking using Optical Flow

Hello, I'm trying to apply point tracking to a scene.
I want to keep only the points that are moving horizontally. Does anyone have any thoughts on this?
The arrays Actual and nextFeature contain the relevant x,y coordinates. I tried taking the difference between the two arrays, but it did not work. I also tried computing the optical flow with Farneback, but it didn't give me a satisfying result. I would really appreciate any thoughts on how to keep only the points moving along a horizontal line.
Thanks.
Here is the code.
private void ProcessFrame(object sender, EventArgs arg)
{
    PointF[][] Actual = new PointF[0][];
    if (Frame == null)
    {
        Frame = _capture.RetrieveBgrFrame();
        Previous_Frame = Frame.Copy();
    }
    else
    {
        Image<Gray, byte> grayf = Frame.Convert<Gray, Byte>();
        Actual = grayf.GoodFeaturesToTrack(300, 0.01d, 0.01d, 5);
        Image<Gray, byte> frame1 = Frame.Convert<Gray, Byte>();
        Image<Gray, byte> prev = Previous_Frame.Convert<Gray, Byte>();
        Image<Gray, float> velx = new Image<Gray, float>(Frame.Size);
        Image<Gray, float> vely = new Image<Gray, float>(Previous_Frame.Size);
        Frame = _capture.RetrieveBgrFrame().Resize(300, 300, Emgu.CV.CvEnum.INTER.CV_INTER_AREA);
        Byte[] status;
        Single[] trer;
        PointF[][] feature = Actual;
        PointF[] nextFeature = new PointF[300];
        Image<Gray, Byte> buf1 = new Image<Gray, Byte>(Frame.Size);
        Image<Gray, Byte> buf2 = new Image<Gray, Byte>(Frame.Size);
        opticalFlowFrame = new Image<Bgr, Byte>(prev.Size);
        Image<Bgr, Byte> FlowFrame = new Image<Bgr, Byte>(prev.Size);

        OpticalFlow.PyrLK(prev, frame1, Actual[0], new System.Drawing.Size(10, 10), 0, new MCvTermCriteria(20, 0.03d),
            out nextFeature, out status, out trer);

        for (int x = 0; x < Actual[0].Length; x++)
        {
            opticalFlowFrame.Draw(new CircleF(new PointF(nextFeature[x].X, nextFeature[x].Y), 1f), new Bgr(Color.Blue), 2);
        }

        new1 = old;
        old = nextFeature;
        Actual[0] = nextFeature;
        Previous_Frame = Frame.Copy();

        captureImageBox.Image = Frame;
        grayscaleImageBox.Image = opticalFlowFrame;
        //cannyImageBox.Image = velx;
        //smoothedGrayscaleImageBox.Image = vely;
    }
}
First, I can only give you a general idea about this, not a code snippet.
Here's how you might do this (one of many possible approaches to tackling the problem):
1. Take the zeroth frame and pass it through goodFeaturesToTrack. Collect the points in an array, say initialPoints.
2. Grab the (zero + one)-th frame. With the points from step 1, run it through calcOpticalFlowPyrLK. Store the next points in another array, say nextPoints. Also keep track of the status and error vectors.
3. Now, with initialPoints and nextPoints in tow, we leave the comfort of OpenCV and do things our way. For every feature in initialPoints and nextPoints (with status set to 1 and error below an acceptable threshold), we calculate the gradient between the points.
4. Accept for horizontal motion only those points whose angle of slope is around either 0 degrees or 180 degrees. Vector directions won't lie perfectly at 0 or 180, so allow a bit of +/- tolerance.
5. Repeat steps 1 to 4 for all frames.
Going through the code you posted, it seems like you've almost nailed steps 1 and 2. However, once you get the vector nextFeature, you're drawing circles around the points: interesting, but not what we need. Check whether you can implement the gradient calculation and filtering; a sketch follows below.
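A minimal sketch of steps 3 and 4 (my own illustration; initialPoints, nextPoints, status, and trackError stand in for the question's Actual[0], nextFeature, status, and trer):

// Sketch: keep only the features whose motion vector is (nearly) horizontal.
List<PointF> horizontal = new List<PointF>();
double maxAngleDeg = 10.0; // +/- tolerance around 0 / 180 degrees
float maxError = 20f;      // acceptable tracking error (an assumed threshold)

for (int i = 0; i < initialPoints.Length; i++)
{
    if (status[i] != 1 || trackError[i] > maxError)
        continue; // tracking failed or was unreliable for this feature

    double dx = nextPoints[i].X - initialPoints[i].X;
    double dy = nextPoints[i].Y - initialPoints[i].Y;
    if (dx == 0 && dy == 0)
        continue; // the point did not move at all

    // Slope angle of the motion vector, folded into [0, 180] degrees.
    double angle = Math.Abs(Math.Atan2(dy, dx) * 180.0 / Math.PI);
    if (angle <= maxAngleDeg || angle >= 180.0 - maxAngleDeg)
        horizontal.Add(nextPoints[i]);
}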
