Body recognition in EmguCV/OpenCV (Tracking person in video) - c#

I am trying to recognize a person in video (not by his face but by his body). What I have done so far is to compute the HOG, HS, and RGB histograms of a person and compare them with those of all other persons to find him.
I am using EmguCV, but help with OpenCV will also be appreciated.
HOG is calculated using:
Size imageSize = new Size(64, 16);
Size blockSize = new Size(16, 16);
Size blockStride = new Size(16, 8);
Size cellSize = new Size(8, 8);
HOGDescriptor hog = new HOGDescriptor(imageSize, blockSize, blockStride, cellSize);
float[] hogs = hog.Compute(image);
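With these parameters (and HOG's default of 9 orientation bins), the 64x16 window contains ((64 - 16) / 16 + 1) x ((16 - 16) / 8 + 1) = 4 blocks of 2x2 cells, so hogs should hold 4 x 4 x 9 = 144 floats.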
Code to calculate HS histograms (the same method is used for RGB):
Image<Gray, byte>[] channels = hsvImage.Copy().Split();
Image<Gray, byte> hue = channels[0];
Image<Gray, byte> sat = channels[1];
dh.Calculate<byte>(new Image<Gray, byte>[] { hue }, true, null);
dh2.Calculate<byte>(new Image<Gray, byte>[] { sat }, true, null);
float[] huehist = dh.GetBinValues();
float[] sathist = dh2.GetBinValues();
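Here dh and dh2 are assumed to be DenseHistogram instances created beforehand, for example (OpenCV stores 8-bit hue as 0-179 and saturation as 0-255):
DenseHistogram dh = new DenseHistogram(180, new RangeF(0, 180));   // hue
DenseHistogram dh2 = new DenseHistogram(256, new RangeF(0, 256));  // saturation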
The distance between two histograms is calculated using:
double HistogramDistance(float[] hist1, float[] hist2)
{
    // sum of absolute bin differences (L1 / Manhattan distance)
    double distance = 0;
    for (int i = 0; i < hist1.Length; i++)
    {
        distance += Math.Abs(hist1[i] - hist2[i]);
    }
    return distance;
}
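For example, comparing the stored hue histogram of personA with a candidate's hue histogram (candidateHueHist is just a placeholder name):
double hueDistance = HistogramDistance(huehist, candidateHueHist);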
What is happening
I am trying to track a selected person across the video feed, and the person can move from camera to camera.
The body of personA is extracted from the video frame, and its HOG, HS, and RGB histograms are calculated and stored. Then, in the next frame, the histograms of all detected persons are calculated and matched against the histograms of personA; the one with the most similar histograms (minimum distance) is considered to be the same person (personA), and tracking continues in this way.
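In code, the per-frame matching step looks roughly like this (ComputeFeatures, personAImage, and detectedPersons are placeholder names for the feature extraction shown above and for the list of person crops found in the current frame):
float[] personAFeatures = ComputeFeatures(personAImage);   // computed once when personA is selected
int bestIndex = -1;
double bestDistance = double.MaxValue;
for (int i = 0; i < detectedPersons.Count; i++)
{
    double d = HistogramDistance(personAFeatures, ComputeFeatures(detectedPersons[i]));
    if (d < bestDistance)
    {
        bestDistance = d;   // keep the closest candidate
        bestIndex = i;
    }
}
// detectedPersons[bestIndex] is then taken to be personA in this frame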
Problem
Accuracy is not good (sometimes it reports two people with very differently colored clothes as the same person)
Suggestions
What should I change or remove?
Should I calculate histograms using CvInvoke.CalcHist(...) instead of dense histograms for HS and RGB?
Should I normalize the histograms before calculating distances (see the sketch after this list)?
Is this normalization method good (every value minus the mean of the array)?
Or should I try something else?
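To make the normalization question concrete, here is a minimal sketch of the subtract-the-mean idea above, next to the divide-by-sum normalization that is commonly used so that histograms of differently sized detections stay comparable:
static float[] SubtractMean(float[] hist)
{
    // the normalization asked about above: every value minus the mean of the array
    float mean = hist.Average();   // requires using System.Linq;
    return hist.Select(v => v - mean).ToArray();
}

static float[] DivideBySum(float[] hist)
{
    // common alternative: scale the bins so they sum to 1 (a discrete probability distribution)
    float sum = hist.Sum();
    return sum > 0 ? hist.Select(v => v / sum).ToArray() : (float[])hist.Clone();
}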
Any kind of help/suggestion will be HIGHLY APPRECIATED. If any more info is needed, please comment.
Thanks,

I am also working on the same project; we have the same idea, and I have solutions for this problem.
Solution 1:
After detecting the person, crop the detected person and extract features, then save those features. The next time you wish to track the person, extract features from the frame and compare them. I have written the whole algorithm.
Solution 2:
(in case you want to speed things up) Find the region of the person, convert it to binary and save the edges; then convert the whole frame to binary and find the human body region.
I have found other tricks for accuracy. Please contact me by email; I have trouble writing code here, and we might work together to find the best solution.

Related

emguCV - Quick calculation of area of the object with holes in binary image

In my application I process images (1920x400) online, at frame rates up to 350 fps. From these images I continuously calculate the area of a black object situated in the middle of the picture. The object often has a few white holes.
Sample image 1:
Sample image 2:
For calculation of the area I'm currently using the emguCV function CvInvoke.CountNonZero(Mat) on the thresholded (B/W) image. This solution runs pretty well, but my problem is the holes, which can misrepresent the results significantly.
I tried to use morphological erosion in order to fill the holes:
if (erodeEnable)
{
    // 5x5 structuring element filled with ones
    Mat kernel = new Mat(5, 5, DepthType.Cv8U, 1);
    kernel.SetTo(new MCvScalar(1));
    CvInvoke.Erode(postProcessTempMat, postProcessTempMat, kernel, new System.Drawing.Point(0, 0), erodeIterations, BorderType.Default, new MCvScalar(0, 0, 0));
}
but this operation deforms the object shape dramatically. I want to fill the holes but keep the outer shape unchanged, if possible.
I also tried my own method, which, column by column, searches for the first upper and lower black pixels and calculates the area of the outer shape from the positions of these pixels.
public int getBlackArea(Mat matImage)
{
    Image<Gray, Byte> tempImg = matImage.ToImage<Gray, Byte>();
    int result = 0;
    int coll = 0;
    int upperBorder, lowerBorder;
    const int avFactor = 3;   // sample every 3rd column to save time
    for (coll = 0; coll < tempImg.Width; coll = coll + avFactor)
    {
        // scan down from the top until the first black pixel
        lowerBorder = 0;
        while ((lowerBorder < tempImg.Height) && (tempImg.Data[lowerBorder, coll, 0] > 0))
        {
            lowerBorder++;
        }
        // scan up from the bottom until the first black pixel
        upperBorder = tempImg.Height - 1;
        while ((upperBorder > 0) && (tempImg.Data[upperBorder, coll, 0] > 0))
        {
            upperBorder--;
        }
        result += ((upperBorder - lowerBorder) * avFactor);
        //tempImg.Data[(upperBorder - 5), coll, 0] = 120; // draw the outline to check the function
        //tempImg.Data[(lowerBorder + 5), coll, 0] = 120; // draw the outline to check the function
    }
    //tempImg.Save(@"C:\pics\INFOoutlined.jpg");
    return result;
}
This method returns very good results, but it is too time-demanding.
Do you have any idea how I could improve my function in order to speed it up? Or do you have another idea for how I could achieve the same result?
Many thanks in advance :)
It all depends on what morphological operation you choose to perform.
I first binarized the images by selecting an optimal threshold.
Then I performed a morphological open. Contrary to what yvs stated, a morphological close does not change the image at all. Since the holes to be filled are surrounded by black pixels, a morphological open solves the problem.
If, however, the holes were surrounded by white pixels, a morphological close would do the trick.
OUTPUT
Image 1:
Threshold:
Morphological Open:
Image 2:
Threshold:
Morphological Open:
As you can see, in both the cases the holes have been filled.
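A minimal EmguCV equivalent of this threshold-then-open pipeline might look as follows (Emgu CV 3.x calls; grayInput, the 5x5 rectangular kernel and the single iteration are my own illustrative choices):
// Assumed usings: Emgu.CV, Emgu.CV.CvEnum, Emgu.CV.Structure
Mat binary = new Mat();
// Otsu binarization: the dark object stays 0, background and the small holes become 255
CvInvoke.Threshold(grayInput, binary, 0, 255, ThresholdType.Binary | ThresholdType.Otsu);
// Morphological open removes the small white specks (the holes inside the black object)
Mat kernel = CvInvoke.GetStructuringElement(ElementShape.Rectangle, new System.Drawing.Size(5, 5), new System.Drawing.Point(-1, -1));
CvInvoke.MorphologyEx(binary, binary, MorphOp.Open, kernel, new System.Drawing.Point(-1, -1), 1, BorderType.Default, new MCvScalar());
// Black-object area = total pixels minus the remaining white pixels
int blackArea = binary.Rows * binary.Cols - CvInvoke.CountNonZero(binary);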

Visualize depth for a single image

I'm new to EmguCV, OpenCV and machine vision in general. I translated the code from this stack overflow question from C++ to C#. I also copied their sample image to help myself understand if the code is working as expected or not.
Mat map = CvInvoke.Imread("C:/Users/Cindy/Desktop/coffee_mug.png", Emgu.CV.CvEnum.LoadImageType.AnyColor | Emgu.CV.CvEnum.LoadImageType.AnyDepth);
CvInvoke.Imshow("window", map);
// convert to a single-channel image so MinMaxIdx can be used
Image<Gray, Byte> imageGray = map.ToImage<Gray, Byte>();
double min = 0, max = 0;
int[] minIndex = new int[5], maxIndex = new int[5];
// find the darkest and brightest pixel values
CvInvoke.MinMaxIdx(imageGray, out min, out max, minIndex, maxIndex, null);
imageGray -= min;   // shift so the minimum becomes 0
Mat adjMap = new Mat();
// scale so the maximum becomes 255
CvInvoke.ConvertScaleAbs(imageGray, adjMap, 255 / (max - min), 0);
CvInvoke.Imshow("Out", adjMap);
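As far as I can tell, this amounts to a linear contrast stretch: each grayscale value p is mapped to roughly (p - min) * 255 / (max - min), so the darkest pixel becomes 0 and the brightest becomes 255.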
Original Image:
After Processing:
This doesn't look like a depth map to me; it just looks like a slightly modified grayscale image, so I'm curious where I went wrong in my code. MinMaxIdx() doesn't work without converting the image to grayscale first, unlike in the code I linked above. Ultimately, what I'd like to be able to do is generate relative depth maps from a single web camera.

Histogram computation with Emgu OpenCV

I would like to get the histogram of an image using Emgu.
I have a Gray scale double image
Image<Gray, double> Crop;
I can get a histogram using
Image<Gray, byte> CropByte = Crop.Convert<Gray, byte>();
DenseHistogram hist = new DenseHistogram(BinCount, new RangeF(0.0f, 255.0f));
hist.Calculate<Byte>(new Image<Gray, byte>[] { CropByte }, true, null);
The problem is that, doing it this way, I need to convert to a byte image. This is problematic because it skews my results: it gives a slightly different histogram from what I would get if it were possible to use a double image.
I have tried using CvInvoke to use the internal opencv function to compute a histogram.
IntPtr[] x = { Crop };
DenseHistogram cropHist = new DenseHistogram(BinCount, new RangeF(MinCrop, MaxCrop));
CvInvoke.cvCalcArrHist(x, cropHist, false, IntPtr.Zero);
The trouble is that I'm finding it hard to work out how to use this function correctly.
Does emgu/opencv allow me to do this? Do I need to write the function myself?
This is not an EmguCV/OpenCV issue; the idea itself makes no sense, because a histogram over double values would require far more memory than is available. That holds as long as the histogram is allocated with a fixed size. The only way around it would be a histogram that is allocated dynamically as the image is processed, but that would be dangerous with big images, since it could end up allocating as much memory as the image itself.
I guess that your double image contains many identical values; otherwise a histogram would not be very useful. So one way around this is to remap your values to shorts (16-bit) instead of bytes (8-bit); that way your histogram would be quite similar to what you expect from your double values.
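A minimal sketch of that remapping in Emgu syntax (illustrative only: the variable names and the use of the image's own min/max for scaling are my assumptions, and it relies on the 16-bit path of cvCalcHist shown in the other answer):
// Find the actual value range of the double image (MinMax returns one value per channel)
double[] minVals, maxVals;
System.Drawing.Point[] minLocs, maxLocs;
Crop.MinMax(out minVals, out maxVals, out minLocs, out maxLocs);
double min = minVals[0], max = maxVals[0];
// Linearly rescale into the 16-bit range, then histogram the 16-bit image
Image<Gray, ushort> crop16 = ((Crop - min) * (65535.0 / (max - min))).Convert<Gray, ushort>();
DenseHistogram hist16 = new DenseHistogram(BinCount, new RangeF(0f, 65536f));
hist16.Calculate<ushort>(new Image<Gray, ushort>[] { crop16 }, true, null);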
I had a look in histogram.cpp in the opencv source code.
Inside the function
void cv::calcHist( const Mat* images, int nimages, const int* channels,
InputArray _mask, OutputArray _hist, int dims, const int* histSize,
const float** ranges, bool uniform, bool accumulate )
There is a section which handles different image types
if( depth == CV_8U )
    calcHist_8u(ptrs, deltas, imsize, ihist, dims, ranges, _uniranges, uniform );
else if( depth == CV_16U )
    calcHist_<ushort>(ptrs, deltas, imsize, ihist, dims, ranges, _uniranges, uniform );
else if( depth == CV_32F )
    calcHist_<float>(ptrs, deltas, imsize, ihist, dims, ranges, _uniranges, uniform );
else
    CV_Error(CV_StsUnsupportedFormat, "");
While double images are not handled here yet, float is.
While floating point loses a little bit of precision compared to double, it's not a significant problem.
The following code snippet worked well for me
Image<Gray, float> cropFloat = Crop.Convert<Gray, float>();
DenseHistogram hist = new DenseHistogram(BinCount, new RangeF(0.0f, 255.0f));
hist.Calculate<float>(new Image<Gray, float>[] { cropFloat }, true, null);

get the lowest and highest most popular hue color from an image

I have an image and I want to find the most dominant lowest hue colour and the most dominant highest hue colour in that image.
It is possible that several colours/hues are close to each other in populace; if that is the case, I would need to take an average of the most popular.
I am using emgu framework here.
I load an image into the HSV colour space.
I split the hue channel away from the main image.
I then use the DenseHistogram to return my ranges of 'buckets'.
Now, I could enumerate through the bin collection to get what I want but I am mindful of conserving memory when and wherever I can.
So, is there a way of getting what I need at all from the DenseHistogram 'object'?
I have tried MinMax (as shown below) and I have considered using LINQ, but I am not sure whether that is expensive to use and/or how to use it with just a float array.
This is my code so far:
float[] GrayHist;
Image<Hsv, Byte> hsvSample = new Image<Hsv, byte>("An image file somewhere");
DenseHistogram Histo = new DenseHistogram(255, new RangeF(0, 255));
Histo.Calculate(new Image<Gray, Byte>[] { hsvSample[0] }, true, null);
GrayHist = new float[256];
Histo.MatND.ManagedArray.CopyTo(GrayHist, 0);
float mins;
float maxs;
int[] minVals;
int[] maxVals;
Histo.MinMax(out mins, out maxs, out minVals, out maxVals); //only gets lowest and highest and not most popular
List<float> ranges= GrayHist.ToList().OrderBy( //not sure what to put here..
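Something along these lines might be what the OrderBy is heading toward; this is a plain-LINQ sketch on the float array, and the 90% cutoff used to decide which bins count as "close in populace" is an arbitrary illustration:
// Pair every bin value with its index, most populated bins first (requires System.Linq)
var binsByCount = GrayHist
    .Select((count, bin) => new { bin, count })
    .OrderByDescending(x => x.count)
    .ToList();
int mostPopularBin = binsByCount[0].bin;   // single dominant hue bucket
// Treat bins within 90% of the top count as "dominant" and take the lowest/highest hue among them
float cutoff = binsByCount[0].count * 0.9f;
var dominantBins = binsByCount.Where(x => x.count >= cutoff).Select(x => x.bin).ToList();
int lowestDominantHue = dominantBins.Min();
int highestDominantHue = dominantBins.Max();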

How to detect and count a spiral's turns

I need to detect a spiral-shaped spring and count its coil turns.
I have tried as follows:
Image<Bgr, Byte> ProcessImage(Image<Bgr, Byte> img)
{
    Image<Bgr, Byte> imgClone = img.Clone();
    Bgr bgrRed = new Bgr(System.Drawing.Color.Red);

    #region Algorithm 1

    // note: PyrUp()/PyrDown() return new images, so without assigning the result these calls leave imgClone unchanged
    imgClone.PyrUp();
    imgClone.PyrDown();
    imgClone.PyrUp();
    imgClone.PyrDown();
    imgClone.PyrUp();
    imgClone.PyrDown();

    imgClone._EqualizeHist();
    imgClone._Dilate(20);
    imgClone._EqualizeHist();
    imgClone._Erode(10);

    imgClone.PyrUp();
    imgClone.PyrDown();
    imgClone.PyrUp();
    imgClone.PyrDown();
    imgClone.PyrUp();
    imgClone.PyrDown();

    imgClone._EqualizeHist();
    imgClone._Dilate(20);
    imgClone._EqualizeHist();
    imgClone._Erode(10);

    Image<Gray, Byte> imgCloneGray = new Image<Gray, byte>(imgClone.Width, imgClone.Height);
    Image<Bgr, Byte> imgCloneYcc = new Image<Bgr, byte>(imgClone.Width, imgClone.Height);
    CvInvoke.cvCvtColor(imgClone, imgCloneGray, Emgu.CV.CvEnum.COLOR_CONVERSION.CV_BGR2GRAY);
    imgCloneGray = imgCloneGray.Canny(c_thresh, c_threshLink);//, (int)c_threshSize);
    Contour<System.Drawing.Point> pts = imgCloneGray.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_EXTERNAL);
    CvInvoke.cvCvtColor(imgCloneGray, imgCloneYcc, Emgu.CV.CvEnum.COLOR_CONVERSION.CV_GRAY2BGR);
    if (null != pts)
    {
        imgClone.Draw(pts, bgrRed, 2);
        imgClone.Draw(pts.BoundingRectangle, bgrRed, 2);
    }

    #endregion

    return imgClone;
}
I am somehow able to get the spring, but how do I get the count? I am looking for algorithms.
I am currently not looking for speed optimization.
This is similar to counting fingers. The spring's spiral is too thin to pick up using contours. What else can be done? http://www.luna-arts.de/others/misc/HandsNew.zip
You have a good final binarization over there, but it looks like it is too restricted to this single case. I would do a relatively simple, but probably more robust, preprocessing step to allow a reasonably good binarization. From Mathematical Morphology, there is a transform called h-dome, which is used to remove irrelevant minima/maxima by suppressing minima/maxima of height < h. This operation might not be readily available in your image processing library, but it is not hard to implement. To binarize this preprocessed image I opted for Otsu's method, since it is automatic and statistically optimal.
Here is the input image after h-dome transformations, and the binary image:
Now, to count the number of "spiral turns" I did something very simple: I split the spirals so I can count them as connected components. To split them I did a single morphological opening with a vertical line, followed by a single dilation by an elementary square. This produces the following image:
Counting the components gives 15. Since you have 13 of them that are not too close, this approach counted them all correctly. The groups at the left and at the right were each counted as a single component.
The full Matlab code used to do these steps:
f = rgb2gray(imread('http://i.stack.imgur.com/i7x7L.jpg'));
% For this image, the next two lines are optional, as they lead to
% basically the same binary image.
f1 = imhmax(f, 30);
f2 = imhmin(f1, 30);
bin1 = ~im2bw(f2, graythresh(f2));
bin2 = bwmorph(imopen(bin1, strel('line', 15, 90)), 'dilate');
