I need to detect a spiral-shaped spring and count its coil turns.
I have tried the following:
Image<Bgr, Byte> ProcessImage(Image<Bgr, Byte> img)
{
    Image<Bgr, Byte> imgClone = img.Clone();
    Bgr bgrRed = new Bgr(System.Drawing.Color.Red);

    #region Algorithm 1
    // PyrUp/PyrDown return new images rather than working in place,
    // so their results must be assigned back. Each up/down pair
    // low-pass filters the image; the whole pass is repeated twice.
    for (int pass = 0; pass < 2; pass++)
    {
        for (int i = 0; i < 3; i++)
        {
            imgClone = imgClone.PyrUp().PyrDown();
        }
        imgClone._EqualizeHist();
        imgClone._Dilate(20);
        imgClone._EqualizeHist();
        imgClone._Erode(10);
    }

    Image<Gray, Byte> imgCloneGray = imgClone.Convert<Gray, Byte>();
    imgCloneGray = imgCloneGray.Canny(c_thresh, c_threshLink);//, (int)c_threshSize);

    // FindContours returns a linked list; walk it via HNext so that
    // every contour is drawn, not just the first one.
    for (Contour<System.Drawing.Point> pts = imgCloneGray.FindContours(
             Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
             Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_EXTERNAL);
         pts != null; pts = pts.HNext)
    {
        imgClone.Draw(pts, bgrRed, 2);
        imgClone.Draw(pts.BoundingRectangle, bgrRed, 2);
    }
    #endregion
    return imgClone;
}
I am somehow able to get the spring, but how do I get the count? I am looking for algorithms.
I am currently not looking for speed optimization.
This is similar to counting fingers. The spring's spiral is too thin to capture with contours. What else can be done? http://www.luna-arts.de/others/misc/HandsNew.zip
You have a good final binarization there, but it looks too restricted to this single case. I would do a relatively simple, but probably more robust, preprocessing to allow a reasonably good binarization. Mathematical Morphology offers a transform called the h-dome, which removes irrelevant minima/maxima by suppressing those of height < h. This operation might not be readily available in your image-processing library, but it is not hard to implement. To binarize the preprocessed image I opted for Otsu's method, since it is automatic and statistically optimal.
Here is the input image after h-dome transformations, and the binary image:
Now, to count the number of "spiral turns" I did something very simple: I split the spirals so I can count them as connected components. To split them I did a single morphological opening with a vertical line, followed by a single dilation by an elementary square. This produces the following image:
Counting the components gives 15. Thirteen of them are not too close together and were counted correctly; the groups at the left and right were each counted as a single component.
The full Matlab code used to do these steps:
f = rgb2gray(imread('http://i.stack.imgur.com/i7x7L.jpg'));
% For this image, the next two lines are optional, as they lead to
% basically the same binary image.
f1 = imhmax(f, 30);
f2 = imhmin(f1, 30);
bin1 = ~im2bw(f2, graythresh(f2));
bin2 = bwmorph(imopen(bin1, strel('line', 15, 90)), 'dilate');
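If your library has no ready-made equivalent of Matlab's component labelling, the counting in the last step can be sketched with a plain breadth-first flood fill. This is a minimal pure-Python sketch on a toy binary grid; the grid below is illustrative, not the actual spring image:

```python
from collections import deque

def count_components(grid):
    """Count 4-connected components of 1-pixels in a binary grid."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                count += 1
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

# Three separated vertical strokes, like the split spiral slices -> 3 "turns".
spiral_slices = [
    [1, 0, 1, 0, 1],
    [1, 0, 1, 0, 1],
    [1, 0, 1, 0, 1],
]
print(count_components(spiral_slices))  # -> 3
```

In EmguCV the same count falls out of walking the contour list returned by `FindContours` on the split binary image.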
How to complete a square using 4 lines?
Hi, I work with EmguCV and I am supposed to identify 4 lines from which to complete an exact square. Maybe someone has an idea? I tried a least-squares fit, but it is not exact and it is difficult to work with its results.
Thanks.
Image<Bgr, Byte> Clean_Image = new Image<Bgr, Byte>(Original).CopyBlank();
Method.ERectangle ERRectangle = null; // <-- my rectangle object
if (Pointlist.Count > 0)
{
    ERRectangle = new Method.ERectangle(Emgu.CV.CvInvoke.MinAreaRect(Pointlist.ToArray()));
    Size s = new Size(
        (int)Method.LineSize(ERRectangle.MainRectangle.Points[0], ERRectangle.MainRectangle.Points[2]),
        (int)Method.LineSize(ERRectangle.MainRectangle.Points[1], ERRectangle.MainRectangle.Points[3]));
    Method.ERectangle rRect = new Method.ERectangle(
        new RotatedRect(ERRectangle.Center, s, (float)ERRectangle.ContourAngle - 45f));
    // I tried to fit a square and align it with the lines -- a possible idea, but not accurate.
    Emgu.CV.CvInvoke.Polylines(Clean_Image, rRect.MainRectangle.Points.ToArray(), true, new MCvScalar(255, 0, 0), 7);
}
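A possibly more exact alternative to fitting a rotated rectangle is to intersect the four detected lines pairwise: the intersection points of adjacent line pairs are the square's corners. A minimal sketch of the line-intersection step (pure Python; the four lines below are hypothetical stand-ins for the detected ones):

```python
def intersect(p1, p2, p3, p4):
    """Intersection of the infinite line through p1-p2 with the line
    through p3-p4; returns None if the lines are parallel."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-12:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# Four detected lines roughly bounding the unit square (made-up data):
top    = ((0.0, 1.0), (1.0, 1.0))
bottom = ((0.0, 0.0), (1.0, 0.0))
left   = ((0.0, 0.0), (0.0, 1.0))
right  = ((1.0, 0.0), (1.0, 1.0))

corners = [intersect(*left, *top), intersect(*top, *right),
           intersect(*right, *bottom), intersect(*bottom, *left)]
print(corners)  # -> [(0.0, 1.0), (1.0, 1.0), (1.0, 0.0), (0.0, 0.0)]
```

With noisy detections the four corners will not form an exact square, but averaging side lengths and snapping the corners onto the nearest square is then a small, well-posed fix-up.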
I have a series of images like:
I want to remove all those small irregular shapes and keep only the big circular shape.
I have tried denoising:
Cv2.FastNlMeansDenoising(myMat, myMat,h:3);
But I am not getting nice results and the process is slow, so it appears other processing is needed. I then tried dilate and blur:
int erosionSize = 2;
Mat element = Cv2.GetStructuringElement(MorphShapes.Cross,
    new OpenCvSharp.Size(2 * erosionSize + 1, 2 * erosionSize + 1),
    new OpenCvSharp.Point(erosionSize, erosionSize));
Cv2.Dilate(myMat, myMat, element, iterations: 2);
Cv2.Blur(myMat, myMat, new OpenCvSharp.Size(9, 9));
but I am getting something like:
I guess maybe using HSV or something would help. What would be a better approach?
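Rather than denoising, one fairly robust route is to binarize, label the connected components, and keep only the largest blob; in OpenCvSharp this corresponds to `Cv2.FindContours` plus filtering by `Cv2.ContourArea`. A pure-Python sketch of the idea on a toy grid (the grid is illustrative):

```python
from collections import deque

def keep_largest_blob(grid):
    """Return a copy of the binary grid keeping only the largest
    4-connected blob of 1-pixels; everything else becomes 0."""
    rows, cols = len(grid), len(grid[0])
    label = [[0] * cols for _ in range(rows)]
    areas = {0: 0}
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not label[r][c]:
                next_label += 1
                areas[next_label] = 0
                q = deque([(r, c)])
                label[r][c] = next_label
                while q:
                    y, x = q.popleft()
                    areas[next_label] += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] and not label[ny][nx]:
                            label[ny][nx] = next_label
                            q.append((ny, nx))
    best = max(areas, key=areas.get)
    return [[1 if label[r][c] == best else 0 for c in range(cols)]
            for r in range(rows)]

# A 2x3 "big" blob surrounded by single-pixel speckles.
noisy = [
    [1, 0, 0, 0, 1],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [1, 0, 0, 0, 0],
]
cleaned = keep_largest_blob(noisy)
print(sum(map(sum, cleaned)))  # -> 6 (only the big blob survives)
```

Keeping every blob above an area threshold, instead of only the largest, is the same loop with a different final filter.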
In my application I process images (1920x400) online at frame rates of up to 350 fps. From these images I continuously calculate the area of a black object situated in the middle of the picture. The object often has a few white holes.
Sample image 1:
Sample image 2:
For the area calculation I currently use the EmguCV function CvInvoke.CountNonZero(Mat) on the thresholded (B/W) image. This solution works pretty well, but my problem is the holes, which can rapidly distort the results.
I tried to use morphological erosion to fill the holes:
if (erodeEnable)
{
    Mat kernel = new Mat(5, 5, DepthType.Cv8U, 1);
    kernel.SetTo(new MCvScalar(1));
    // Use the all-ones kernel built above as the structuring element,
    // anchored at its centre.
    CvInvoke.Erode(postProcessTempMat, postProcessTempMat, kernel,
        new System.Drawing.Point(-1, -1), erodeIterations, BorderType.Default,
        new MCvScalar(0, 0, 0));
}
but this operation deforms the object's shape dramatically. I want to fill the holes but keep the outer shape unchanged, if possible.
I also tried my own method, which searches column by column for the first upper and lower black pixel and calculates the area of the outer shape from the positions of these pixels:
public int getBlackArea(Mat matImage)
{
    Image<Gray, Byte> tempImg = matImage.ToImage<Gray, Byte>();
    int result = 0;
    const int avFactor = 3; // sample every 3rd column and scale the area accordingly

    for (int coll = 0; coll < tempImg.Width; coll += avFactor)
    {
        int lowerBorder = 0;
        // Check the bound before indexing to avoid reading past the image.
        while ((lowerBorder < tempImg.Height) && (tempImg.Data[lowerBorder, coll, 0] > 0))
        {
            lowerBorder++;
        }
        int upperBorder = tempImg.Height - 1;
        while ((upperBorder > 0) && (tempImg.Data[upperBorder, coll, 0] > 0))
        {
            upperBorder--;
        }
        result += (upperBorder - lowerBorder) * avFactor;
        //tempImg.Data[upperBorder - 5, coll, 0] = 120; // draw the outline to check the function
        //tempImg.Data[lowerBorder + 5, coll, 0] = 120; // draw the outline to check the function
    }
    //tempImg.Save(@"C:\pics\INFOoutlined.jpg");
    return result;
}
This method returns very good results, but it is too time-consuming.
Do you have any idea how to speed this function up, or any other way to achieve the same result?
Many thanks in advance :)
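One way to fill the holes without deforming the outer contour at all is to flood-fill the white background from the image border: any white region the fill cannot reach is a hole and can be set to black, after which `CountNonZero` on the inverted image gives the area directly. In OpenCV/EmguCV terms this maps to a `FloodFill` seeded at a border pixel, or to `FindContours` with external retrieval plus a filled `DrawContours`. A pure-Python sketch of the idea on a toy grid:

```python
from collections import deque

def fill_holes(grid):
    """grid: 1 = object (black) pixel, 0 = background (white).
    Fill every 0-region that is not connected to the image border."""
    rows, cols = len(grid), len(grid[0])
    outside = [[False] * cols for _ in range(rows)]
    q = deque()
    # Seed the flood fill with every background pixel on the border.
    for r in range(rows):
        for c in range(cols):
            if (r in (0, rows - 1) or c in (0, cols - 1)) \
                    and not grid[r][c] and not outside[r][c]:
                outside[r][c] = True
                q.append((r, c))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols \
                    and not grid[ny][nx] and not outside[ny][nx]:
                outside[ny][nx] = True
                q.append((ny, nx))
    # Everything not reached from the border is object or hole -> set to 1.
    return [[0 if outside[r][c] else 1 for c in range(cols)]
            for r in range(rows)]

# A ring of object pixels with a one-pixel hole in the middle.
ring = [
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
]
filled = fill_holes(ring)
print(sum(map(sum, filled)))  # area with the hole filled -> 9
```

The outer silhouette is untouched, which is exactly the property erosion lacks.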
It all depends on what morphological operation you choose to perform.
I first binarized the images by selecting an optimal threshold.
Then I performed a morphological open. Contrary to what yvs stated, a morphological close does not change the image at all here. Since the holes to be filled are surrounded by black pixels, a morphological open solves the problem.
If, however, the holes were surrounded by white pixels, a morphological close would do the trick.
OUTPUT
Image 1:
Threshold:
Morphological Open:
Image 2:
Threshold:
Morphological Open:
As you can see, in both the cases the holes have been filled.
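For reference, with a white foreground a close is a dilation followed by an erosion: the dilation fills holes smaller than the structuring element and the erosion restores the outer boundary. A toy sketch with a 3x3 square structuring element (pure Python, zero padding at the border; the grid is illustrative):

```python
def dilate(grid):
    """3x3 square dilation of a binary grid."""
    rows, cols = len(grid), len(grid[0])
    return [[1 if any(0 <= r + dy < rows and 0 <= c + dx < cols
                      and grid[r + dy][c + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
             for c in range(cols)] for r in range(rows)]

def erode(grid):
    """3x3 square erosion, treating pixels outside the grid as 0."""
    rows, cols = len(grid), len(grid[0])
    return [[1 if all(0 <= r + dy < rows and 0 <= c + dx < cols
                      and grid[r + dy][c + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
             for c in range(cols)] for r in range(rows)]

def close_binary(grid):
    """Morphological close = dilate, then erode."""
    return erode(dilate(grid))

# A 5x5 white square with a one-pixel hole, embedded in a 7x7 grid.
obj = [[1 if 1 <= r <= 5 and 1 <= c <= 5 and not (r == 3 and c == 3) else 0
        for c in range(7)] for r in range(7)]
closed = close_binary(obj)
print(closed[3][3])              # hole filled -> 1
print(sum(map(sum, closed)))     # -> 25, same outer square as before
```

Swapping which value counts as foreground turns this same pair of operations into the open used above.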
I am trying to recognize a person in video (not by his face but by his body). What I have done so far is to compute the HOG, HS, and RGB histograms of a person and compare them with those of all other people in order to find him.
I am using EmguCV, but OpenCV help will also be appreciated.
HOG is calculated using:
Size imageSize = new Size(64, 16);
Size blockSize = new Size(16, 16);
Size blockStride = new Size(16, 8);
Size cellSize = new Size(8, 8);
HOGDescriptor hog = new HOGDescriptor(imageSize, blockSize, blockStride, cellSize);
float[] hogs = hog.Compute(image);
Code to calculate the HS histograms (the same method is used for RGB):
// dh and dh2 are DenseHistogram instances created beforehand, e.g.
// new DenseHistogram(180, new RangeF(0, 180)) for hue and
// new DenseHistogram(256, new RangeF(0, 256)) for saturation.
Image<Gray, byte>[] channels = hsvImage.Copy().Split();
Image<Gray, byte> hue = channels[0];
Image<Gray, byte> sat = channels[1];
dh.Calculate<byte>(new Image<Gray, byte>[] { hue }, true, null);
dh2.Calculate<byte>(new Image<Gray, byte>[] { sat }, true, null);
float[] huehist = dh.GetBinValues();
float[] sathist = dh2.GetBinValues();
Calculating the distance between two histograms using:
double distance = 0;
for (int i = 0; i < hist1.Length; i++)
{
    distance += Math.Abs(hist1[i] - hist2[i]);
}
return distance;
What is happening
I am trying to track a selected person across a video feed; the person can move from camera to camera.
The body of personA is extracted from the video frame, and its HOG, HS, and RGB histograms are calculated and stored. Then, in the next frame, the histograms of all detected persons are calculated and matched against personA's; the best match (minimum distance) is taken to be the same person, and tracking continues from there.
Problem
Accuracy is not good (sometimes it declares two people with very differently coloured clothes to be the same).
Suggestions
What should I change or remove?
Should I calculate histograms using CvInvoke.CalcHist(...) instead of dense histograms for HS and RGB?
Should I normalize the histograms before calculating distances?
Is this normalization method good (every value minus the mean of the array)?
Or should I try something else?
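On the normalization point: dividing each histogram by its sum (rather than subtracting the mean) makes detections of different sizes comparable before taking the L1 distance. A small pure-Python sketch of that variant:

```python
def normalized_l1_distance(hist1, hist2):
    """L1 distance between two histograms after normalizing each to unit
    sum, so detections of different pixel counts become comparable."""
    s1, s2 = sum(hist1), sum(hist2)
    h1 = [v / s1 for v in hist1] if s1 else list(hist1)
    h2 = [v / s2 for v in hist2] if s2 else list(hist2)
    return sum(abs(a - b) for a, b in zip(h1, h2))

# The same colour distribution at two different scales -> distance 0.
print(normalized_l1_distance([2, 4, 2], [1, 2, 1]))  # -> 0.0
```

The same normalization can be done in EmguCV before comparison; alternatively `CvInvoke.CompareHist` offers correlation and Bhattacharyya metrics that are less sensitive to scale to begin with.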
Any kind of help or suggestion will be HIGHLY APPRECIATED. If any more info is needed, please comment.
Thanks,
I am also working on the same project; we have the same idea, and I have solutions for this problem.
Solution 1:
After detecting the person, crop the detection and extract its features, then save those features. The next time you wish to track the person, extract features from the frame and compare them. I have written the whole algorithm.
Solution 2 (in case you want to speed things up):
Find the region of the person, convert it to binary, and save the edges; then convert the whole frame to binary and find the human-body region.
I have found other tricks for accuracy; please contact me by email. I have trouble writing the code up here, but we might work together to find the best solution.
I have an image and I want to find the most dominant lowest-hue colour and the most dominant highest-hue colour in it.
It is possible that several colours/hues are close to each other in populace; if that is the case, I would need to take an average of the most popular ones.
I am using emgu framework here.
I load an image into the HSV colour space.
I split the hue channel away from the main image.
I then use the DenseHistogram to return my ranges of 'buckets'.
Now, I could enumerate through the bin collection to get what I want but I am mindful of conserving memory when and wherever I can.
So, is there a way of getting what I need at all from the DenseHistogram 'object'?
I have tried MinMax (as shown below) and I have considered using LINQ, but I am not sure whether that is expensive and/or how to use it with just a float array.
This is my code so far:
float[] GrayHist;
Image<Hsv, Byte> hsvSample = new Image<Hsv, byte>("An image file somewhere");
DenseHistogram Histo = new DenseHistogram(255, new RangeF(0, 255));
Histo.Calculate(new Image<Gray, Byte>[] { hsvSample[0] }, true, null);
GrayHist = new float[255];
Histo.MatND.ManagedArray.CopyTo(GrayHist, 0);
float mins;
float maxs;
int[] minVals;
int[] maxVals;
Histo.MinMax(out mins, out maxs, out minVals, out maxVals); // only gets the lowest and highest counts, not the most popular
// One way to finish the LINQ idea: bin indices ordered by how populated they are.
List<int> ranges = Enumerable.Range(0, GrayHist.Length)
    .OrderByDescending(i => GrayHist[i])
    .ToList();
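For what it's worth, a single pass or sort over a ~256-element float array is cheap, so enumerating the bins is unlikely to be a memory or speed concern. A pure-Python sketch of picking the dominant bins, from which the lowest- and highest-hue dominant colours (or an average over near-ties) follow (the histogram values are made up):

```python
def dominant_bins(hist, top_n=3):
    """Indices of the most populated histogram bins, highest count first.
    Ties keep their original (ascending-hue) order because sorted() is stable."""
    return sorted(range(len(hist)), key=lambda i: hist[i], reverse=True)[:top_n]

hue_hist = [0.0, 5.0, 1.0, 9.0, 9.0, 2.0]  # toy hue histogram
top = dominant_bins(hue_hist)
print(top[0])               # -> 3, the single most dominant hue bin
print(min(top), max(top))   # -> 1 4, lowest and highest hue among the dominant bins
```

Averaging bins whose counts are within some tolerance of the maximum would give the "average of the most popular" hue the question asks about.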