I am using EmguCV (C#) histograms to compare two HSV images, but sometimes I get negative values. I assumed that when I compare two histograms, the result would lie in the interval [0, 1]. However, the value for hue or saturation is sometimes a negative number such as -0.145.
First, I get a byte image array, which I convert into an Image<Hsv, Byte> - img1.
Image<Hsv, Byte> img1 = null;
Mat byteImageMat = new Mat();
Mat hsvMat = new Mat();
// Decode the raw bytes, then convert from BGR to HSV
CvInvoke.Imdecode(request.ByteImage, Emgu.CV.CvEnum.ImreadModes.AnyColor, byteImageMat);
CvInvoke.CvtColor(byteImageMat, hsvMat, ColorConversion.Bgr2Hsv);
img1 = hsvMat.ToImage<Hsv, Byte>();
Then I create DenseHistograms and split the individual channels.
DenseHistogram ComparedHistoHue = new DenseHistogram(180, new RangeF(0, 180));
DenseHistogram ComparedHistoSaturation = new DenseHistogram(256, new RangeF(0, 256));
DenseHistogram ComparedHistoBrightness = new DenseHistogram(256, new RangeF(0, 256));
Image<Gray, Byte> hueChannel = img1[0];
Image<Gray, Byte> saturationChannel = img1[1];
Image<Gray, Byte> brightnessChannel = img1[2];
After that I calculate the histograms:
ComparedHistoHue.Calculate(new Image<Gray, Byte>[] { hueChannel }, false, null);
ComparedHistoSaturation.Calculate(new Image<Gray, Byte>[] { saturationChannel }, false, null);
ComparedHistoBrightness.Calculate(new Image<Gray, Byte>[] { brightnessChannel }, false, null);
At this point, I load the histograms from a file which I created before and assign them to Mats (loadedMatHue, loadedMatSaturation and loadedMatBrightness).
double hue = CvInvoke.CompareHist(loadedMatHue, ComparedHistoHue, Emgu.CV.CvEnum.HistogramCompMethod.Correl);
double saturation = CvInvoke.CompareHist(loadedMatSaturation, ComparedHistoSaturation, Emgu.CV.CvEnum.HistogramCompMethod.Correl);
double brightness = CvInvoke.CompareHist(loadedMatBrightness, ComparedHistoBrightness, Emgu.CV.CvEnum.HistogramCompMethod.Correl);
Can somebody tell me why the hue or saturation variable ends up with a negative value? In my tests, there is only ever one negative value at a time across the three doubles.
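For reference, the Correl method computes the Pearson correlation coefficient between the two histograms, which by definition lies in [-1, 1], so negative results are possible whenever the histograms are anti-correlated. A small Python sketch of that formula, separate from the EmguCV code above:

```python
def correl(h1, h2):
    """Pearson correlation between two histograms, as used by
    OpenCV's HISTCMP_CORREL; the result lies in [-1, 1]."""
    n = len(h1)
    m1, m2 = sum(h1) / n, sum(h2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
    den = (sum((a - m1) ** 2 for a in h1) *
           sum((b - m2) ** 2 for b in h2)) ** 0.5
    return num / den if den else 1.0

# Anti-correlated histograms produce a negative score
print(correl([1.0, 2.0, 3.0], [3.0, 2.0, 1.0]))  # → -1.0
```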
For HSV, the idea that the numbers will be between 0 and 1 is incorrect. If you want your image to have values between 0 and 1, that image would have to be in grayscale.
HSV splits colour into three components: Hue, Saturation, and Value.
Hue is stored from 0 to 360 degrees, but it can become negative if you rotate the hue past 0.
Saturation is considered to run from 0 to 1, i.e. grayscale values. If you have negative values in this channel, disregard them, as the lowest this value should be is 0. The same can be said for the highest value, which will be 1, since the highest value of a grayscale channel can only be 1. Like I said before, it's best to think of this channel in terms of grayscale from 0 to 1.
Value is very similar to saturation, the only difference being that value is considered the "lightness of the color, by the given S [saturation]". It can also only be between 0 and 1, and any values outside of this range should be clipped.
If you want a more in-depth explanation, you can check out this Stack post, which is very detailed and deserves credit here.
If you do have to clip these values, you can always access the pixel values for each channel using the sample code below.
Image<Hsv, Byte> sampleImage = new Image<Hsv, Byte>(@"path\to\image");
// x and y are the pixel coordinates in the image
// Hue channel
byte hue = sampleImage.Data[y, x, 0];
// Saturation channel
byte sat = sampleImage.Data[y, x, 1];
// Value channel
byte val = sampleImage.Data[y, x, 2];
You can read these values inside a loop, check whether a pixel is outside the boundaries, and if it is, replace it with the high or low value respectively.
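The loop described above boils down to a per-value clamp; a minimal sketch of that check (hypothetical helper, not part of EmguCV):

```python
def clamp(v, lo=0, hi=255):
    """Clip a channel value to the valid range [lo, hi]."""
    return lo if v < lo else hi if v > hi else v

print(clamp(300), clamp(-5), clamp(128))  # → 255 0 128
```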
Related
I know that we desaturate an image by decreasing the values in the saturation channel. I want to accomplish this in C# with Emgu.
For instance, here is C++ code using OpenCV to do so:
Mat Image = imread("images/any.jpg");
// Specify scaling factor
float saturationScale = 0.01f;
Mat hsvImage;
// Convert to HSV color space
cv::cvtColor(Image,hsvImage,COLOR_BGR2HSV);
// Convert to float32
hsvImage.convertTo(hsvImage,CV_32F);
vector<Mat>channels(3);
// Split the channels
split(hsvImage,channels);
// Multiply S channel by scaling factor
channels[1] = channels[1] * saturationScale;
// Clipping operation performed to limit pixel values
// between 0 and 255
min(channels[1],255,channels[1]);
max(channels[1],0,channels[1]);
// Merge the channels
merge(channels,hsvImage);
// Convert back from float32
hsvImage.convertTo(hsvImage,CV_8UC3);
Mat imSat;
// Convert to BGR color space
cv::cvtColor(hsvImage,imSat,COLOR_HSV2BGR);
// Display the images
Mat combined;
cv::hconcat(Image, imSat, combined);
namedWindow("Original Image -- Desaturated Image", CV_WINDOW_AUTOSIZE);
imshow("Original Image -- Desaturated Image", combined);
waitKey(0);
In C# I have:
float saturationScale = 0.01f;
// Load as a colour (BGR) image; loading as Gray would discard the saturation
var img = new Image<Bgr, byte>("images/any.jpg");
var imhHsv = img.Convert<Hsv, byte>();
var channels = imhHsv.Split();
// Multiply S channel by scaling factor and clip (limit)
channels[1] = (channels[1] * saturationScale);
I am not sure how to merge the modified saturation channel back into imhHsv. If I do this:
CvInvoke.Merge(channels, imhHsv);
there is error:
cannot convert 'Emgu.CV.Image[]' to
'Emgu.CV.IInputArrayOfArrays'
I put a VectorOfMat into CvInvoke.Merge and it works:
Mat[] m = new Mat[3];
m[0] = CvInvoke.CvArrToMat(channels[0]);
m[1] = CvInvoke.CvArrToMat(channels[1]);
m[2] = CvInvoke.CvArrToMat(channels[2]);
VectorOfMat vm = new VectorOfMat(m);
CvInvoke.Merge(vm, imhHsv);
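The desaturation itself, in both the C++ and C# versions, is just a per-pixel multiply followed by a clip to [0, 255]; a minimal Python sketch of that arithmetic (illustrative only, not the Emgu API):

```python
def desaturate(s_channel, scale):
    """Multiply saturation values by a factor and clip to [0, 255]."""
    return [max(0, min(255, round(s * scale))) for s in s_channel]

print(desaturate([0, 100, 200, 255], 0.5))  # → [0, 50, 100, 128]
```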
I am trying to recognize a person in video (not by his face but by his body). What I have done so far is to compute the HOG, HS and RGB histograms of a person and compare them with those of every other person to find him.
I am using EmguCV, but OpenCV help will also be appreciated.
HOG is calculated using:
Size imageSize = new Size(64, 16);
Size blockSize = new Size(16, 16);
Size blockStride = new Size(16, 8);
Size cellSize = new Size(8, 8);
HOGDescriptor hog = new HOGDescriptor(imageSize, blockSize, blockStride, cellSize);
float[] hogs = hog.Compute(image);
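As a sanity check on those parameters: the length of the vector returned by Compute is fixed by the window geometry, namely blocks per window times cells per block times orientation bins (9 bins is the OpenCV default, an assumption here). Sketched in Python:

```python
def hog_descriptor_length(win, block, stride, cell, nbins=9):
    """Descriptor length = blocks-per-window * cells-per-block * nbins."""
    bx = (win[0] - block[0]) // stride[0] + 1   # block positions along x
    by = (win[1] - block[1]) // stride[1] + 1   # block positions along y
    cells = (block[0] // cell[0]) * (block[1] // cell[1])
    return bx * by * cells * nbins

# Size(64, 16) window, 16x16 blocks, 16x8 stride, 8x8 cells
print(hog_descriptor_length((64, 16), (16, 16), (16, 8), (8, 8)))  # → 144
```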
Code to calculate the HS histograms (the same method is used for RGB):
Image<Gray, byte>[] channels = hsvImage.Copy().Split();
Image<Gray, byte> hue = channels[0];
Image<Gray, byte> sat = channels[1];
dh.Calculate<byte>(new Image<Gray, byte>[] { hue }, true, null);
dh2.Calculate<byte>(new Image<Gray, byte>[] { sat }, true, null);
float[] huehist = dh.GetBinValues();
float[] sathist = dh2.GetBinValues();
Calculating the distance between 2 histograms using:
double distance = 0;
for (int i = 0; i < hist1.Length; i++)
{
distance += Math.Abs(hist1[i] - hist2[i]);
}
return distance;
What is happening
I am trying to track a selected person in a video feed, and the person can move from camera to camera.
The body of personA is extracted from the video frame, and his HOG, HS and RGB histograms are calculated and stored. Then, in the next frame, the histograms of all detected persons are calculated and matched against those of personA; the best match (minimum distance) is considered to be the same person (personA), and tracking continues in this way.
Problem
Accuracy is not good (sometimes it reports two people with very differently coloured clothes as the same person).
Suggestions
What should I change/remove?
Should I calculate histograms using CvInvoke.CalcHist(...) instead of dense histograms for HS and RGB?
Should I normalize the histograms before calculating distances?
Is this normalization method good (subtracting the mean of the array from every value)?
Or should I try something else?
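To make the normalization question concrete, here is a Python sketch of the mean-subtraction I described next to the more common sum-to-one scaling (which makes the distance independent of how many pixels the person occupies); illustrative only:

```python
def mean_subtract(h):
    """Normalization I tried: subtract the array mean from every bin."""
    m = sum(h) / len(h)
    return [v - m for v in h]

def sum_to_one(h):
    """Alternative: scale bins so they sum to 1 (size-invariant)."""
    s = sum(h)
    return [v / s for v in h] if s else h

print(mean_subtract([2.0, 4.0, 6.0]))  # → [-2.0, 0.0, 2.0]
print(sum_to_one([1.0, 3.0]))          # → [0.25, 0.75]
```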
Any kind of help/suggestion will be HIGHLY APPRECIATED. If any more info is needed, please comment.
Thanks,
I am also working on the same project; we have the same idea, and I have solutions for this problem.
Solution 1:
After detecting the person, crop the detected person, extract features, and save those features. The next time you wish to track the person, extract features from the frame and compare them. I have written the whole algorithm.
Solution 2:
(in case you want to speed things up) Find the region of the person, convert it to binary, and save the edges; then convert the whole frame to binary and find the human body region.
I have found other tricks for accuracy; please contact me by email. I have trouble writing code, so we might work together to find the best solution.
I would like to get the histogram of an image using Emgu.
I have a grayscale double image:
Image<Gray, double> Crop;
I can get a histogram using
Image<Gray, byte> CropByte = Crop.Convert<Gray, byte>();
DenseHistogram hist = new DenseHistogram(BinCount, new RangeF(0.0f, 255.0f));
hist.Calculate<Byte>(new Image<Gray, byte>[] { CropByte }, true, null);
The problem is that doing it this way I needed to convert to a byte image. This is problematic because it skews my results: it gives a slightly different histogram from what I would get if it were possible to use a double image.
I have tried using CvInvoke to call the internal OpenCV function that computes a histogram:
IntPtr[] x = { Crop };
DenseHistogram cropHist = new DenseHistogram
(
BinCount,
new RangeF
(
MinCrop,
MaxCrop
)
);
CvInvoke.cvCalcArrHist(x, cropHist, false, IntPtr.Zero);
The trouble is I'm finding it hard to work out how to use this function correctly.
Does Emgu/OpenCV allow me to do this? Do I need to write the function myself?
This is not an EmguCV/OpenCV issue; the idea itself makes no sense, as a double histogram would require far more memory than is available. This holds whenever the histogram is allocated with a fixed size. The only way around it would be a histogram that grows dynamically as the image is processed, but that would be dangerous with big images, as it could allocate as much memory as the image itself.
I guess that your double image contains many identical values, otherwise a histogram would not be very useful. So one way around this is to remap your values to a short (16 bits) instead of a byte (8 bits); that way your histogram would be quite similar to what you expect from your double values.
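The remapping suggested above is just a linear rescale of the double range onto 0..65535. A minimal sketch, assuming the image's minimum and maximum are known beforehand:

```python
def remap_to_ushort(values, lo, hi):
    """Linearly map doubles in [lo, hi] onto the 16-bit range 0..65535."""
    span = hi - lo
    return [round((v - lo) / span * 65535) for v in values]

print(remap_to_ushort([0.0, 1.0], 0.0, 1.0))  # → [0, 65535]
```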
I had a look at histogram.cpp in the OpenCV source code.
Inside the function
void cv::calcHist( const Mat* images, int nimages, const int* channels,
InputArray _mask, OutputArray _hist, int dims, const int* histSize,
const float** ranges, bool uniform, bool accumulate )
There is a section which handles different image types
if( depth == CV_8U )
calcHist_8u(ptrs, deltas, imsize, ihist, dims, ranges, _uniranges, uniform );
else if( depth == CV_16U )
calcHist_<ushort>(ptrs, deltas, imsize, ihist, dims, ranges, _uniranges, uniform );
else if( depth == CV_32F )
calcHist_<float>(ptrs, deltas, imsize, ihist, dims, ranges, _uniranges, uniform );
else
CV_Error(CV_StsUnsupportedFormat, "");
While double images are not handled here, float is.
Floating point loses a little precision compared to double, but that's not a significant problem.
The following code snippet worked well for me:
Image<Gray, float> CropFloat = Crop.Convert<Gray, float>();
DenseHistogram hist = new DenseHistogram(BinCount, new RangeF(0.0f, 255.0f));
hist.Calculate<float>(new Image<Gray, float>[] { CropFloat }, true, null);
Hi, I am using the HoughLines method to detect lines from a camera. I've filtered my image "imgProcessed" using an ROI, keeping just the black objects to make the tracking simple. But when I try to use the HoughLines method, it gives me an error that my "cannyEdges" has some invalid arguments. Here's my code:
Image<Gray, Byte> gray = imgProcessed.Convert<Gray, Byte>().PyrDown().PyrUp();
Gray cannyThreshold = new Gray(180);
Gray cannyThresholdLinking = new Gray(120);
Gray circleAccumulatorThreshold = new Gray(120);
Image<Gray, Byte> cannyEdges = gray.Canny(cannyThreshold, cannyThresholdLinking);
LineSegment2D[] lines = imgProcessed.cannyEdges.HoughLines(
cannyThreshold,
cannyThresholdLinking,
1, //Distance resolution in pixel-related units
Math.PI / 45.0, //Angle resolution measured in radians.
50, //threshold
100, //min Line width
1 //gap between lines
)[0]; //Get the lines from the first channel
I've edited my code and it works. I divided it into two parts: in the first part I detect the Canny edges myself rather than having the HoughLines method detect them automatically, and in the second I use the HoughLinesBinary method, which takes fewer arguments, instead of HoughLines. Here is the code:
Image<Gray, Byte> gray1 = imgProcessed.Convert<Gray, Byte>().PyrDown().PyrUp();
Image<Gray, Byte> cannyGray = gray1.Canny(120, 180);
imgProcessed = cannyGray;
LineSegment2D[] lines = imgProcessed.HoughLinesBinary(
1, //Distance resolution in pixel-related units
Math.PI / 45.0, //Angle resolution measured in radians.
50, //threshold
100, //min Line width
1 //gap between lines
)[0]; //Get the lines from the first channel
I have an image and I want to find the most dominant lowest-hue colour and the most dominant highest-hue colour in it.
It is possible that several colours/hues are close to each other in frequency, and if that is the case I would need to take an average of the most popular.
I am using emgu framework here.
I load an image into the HSV colour space.
I split the hue channel away from the main image.
I then use the DenseHistogram to return my ranges of 'buckets'.
Now, I could enumerate through the bin collection to get what I want but I am mindful of conserving memory when and wherever I can.
So, is there a way of getting what I need at all from the DenseHistogram 'object'?
I have tried MinMax (as shown below) and I have considered using LINQ, but I am not sure whether that is expensive and/or how to use it with just a float array.
This is my code so far:
float[] GrayHist;
Image<Hsv, Byte> hsvSample = new Image<Hsv, byte>("An image file somewhere");
DenseHistogram Histo = new DenseHistogram(256, new RangeF(0, 256));
Histo.Calculate(new Image<Gray, Byte>[] { hsvSample[0] }, true, null);
GrayHist = new float[256];
Histo.MatND.ManagedArray.CopyTo(GrayHist, 0);
float mins;
float maxs;
int[] minVals;
int[] maxVals;
Histo.MinMax(out mins, out maxs, out minVals, out maxVals); //only gets lowest and highest and not most popular
List<float> ranges= GrayHist.ToList().OrderBy( //not sure what to put here..
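Not a full answer, but the "most popular bins" selection itself can be sketched with plain filtering over the float array (the LINQ equivalent would be an OrderByDescending over indexed bins). Python is used for illustration; the bin data and the tolerance threshold are hypothetical:

```python
def dominant_extremes(bins, tolerance=0.9):
    """Among bins whose count is within `tolerance` of the maximum,
    return the lowest and highest hue index (averaging near-ties
    is left out for brevity)."""
    peak = max(bins)
    popular = [i for i, v in enumerate(bins) if v >= tolerance * peak]
    return min(popular), max(popular)

hist = [0, 5, 50, 49, 3, 0, 48, 2]  # hypothetical bin counts
print(dominant_extremes(hist))  # → (2, 6)
```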