How to desaturate an image using EmguCV in C#

I know that we desaturate an image by decreasing the values in the Saturation channel. I want to accomplish this in C# with EmguCV.
For instance, here is C++ code using OpenCV to do so:
Mat Image = imread("images/any.jpg");
// Specify scaling factor
float saturationScale = 0.01f;
Mat hsvImage;
// Convert to HSV color space
cv::cvtColor(Image,hsvImage,COLOR_BGR2HSV);
// Convert to float32
hsvImage.convertTo(hsvImage,CV_32F);
vector<Mat>channels(3);
// Split the channels
split(hsvImage,channels);
// Multiply S channel by scaling factor
channels[1] = channels[1] * saturationScale;
// Clipping operation performed to limit pixel values
// between 0 and 255
min(channels[1],255,channels[1]);
max(channels[1],0,channels[1]);
// Merge the channels
merge(channels,hsvImage);
// Convert back from float32
hsvImage.convertTo(hsvImage,CV_8UC3);
Mat imSat;
// Convert to BGR color space
cv::cvtColor(hsvImage,imSat,COLOR_HSV2BGR);
// Display the images
Mat combined;
cv::hconcat(Image, imSat, combined);
namedWindow("Original Image -- Desaturated Image", CV_WINDOW_AUTOSIZE);
imshow("Original Image -- Desaturated Image", combined);
waitKey(0);
In C# I have:
var img = new Image<Bgr, byte>("images/any.jpg");
var imhHsv = img.Convert<Hsv, byte>();
var channels = imhHsv.Split();
// Multiply S channel by scaling factor and clip (limit)
channels[1] = (channels[1] * saturationScale);
I am not sure how to merge the modified saturation channel back into imhHsv. If I do this:
CvInvoke.Merge(channels, imhHsv);
there is an error:
cannot convert 'Emgu.CV.Image[]' to
'Emgu.CV.IInputArrayOfArrays'

I put a VectorOfMat into CvInvoke.Merge and it works:
Mat[] m = new Mat[3];
m[0] = CvInvoke.CvArrToMat(channels[0]);
m[1] = CvInvoke.CvArrToMat(channels[1]);
m[2] = CvInvoke.CvArrToMat(channels[2]);
VectorOfMat vm = new VectorOfMat(m);
CvInvoke.Merge(vm, imhHsv);
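Putting the pieces together, here is a complete sketch of the desaturation in Emgu (saturationScale and the file path come from the snippets above; note that Image<,> arithmetic already saturates to the 0-255 byte range, so the explicit min/max clipping of the C++ version is not needed):

```csharp
using Emgu.CV;
using Emgu.CV.Structure;
using Emgu.CV.Util;

// Load as BGR and convert to HSV
var img = new Image<Bgr, byte>("images/any.jpg");
float saturationScale = 0.01f;

var imgHsv = img.Convert<Hsv, byte>();
var channels = imgHsv.Split();

// Multiply the S channel by the scaling factor;
// byte arithmetic on Image<,> clips to 0..255 on its own
channels[1] = channels[1] * saturationScale;

// Merge the channels back via a VectorOfMat
using (var vm = new VectorOfMat(channels[0].Mat, channels[1].Mat, channels[2].Mat))
{
    CvInvoke.Merge(vm, imgHsv);
}

// Convert back to BGR for display
var desaturated = imgHsv.Convert<Bgr, byte>();
```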

Related

How To Threshold Raw Byte Image with EmguCV

I am trying to threshold and also apply a Gaussian blur to a raw byte image in EmguCV. I have a raw byte array (the bytes from a DICOM file); I read those bytes and store them in an empty image. As soon as I apply a threshold or a Gaussian blur I get this exception:
OpenCv: src.type() == CV_8UC1.
Thank you.
var pixelData_ = dataset.GetValues<byte>(DicomTag.PixelData);
var splitPixelSpacing = pixelSpacing.Split('\\');
var IPP = dataset.GetString(DicomTag.ImagePositionPatient);
var iimg = new Image<Gray, ushort>(cols, rows);
var parse = float.Parse(splitPixelSpacing[0], CultureInfo.InvariantCulture.NumberFormat);
var newPixelSpacing = parse * (cols / 512);
iimg.Bytes = pixelData_;
iimg = iimg.Resize(512, 512, Inter.Area);
var imgBin = new Image<Gray, byte>(512, 512, new Gray(10));
var blr = iimg.ThresholdAdaptive(new Gray(100),
AdaptiveThresholdType.GaussianC,
ThresholdType.ToZero,
5,
new Gray(2));
// I set a breakpoint here; then I get this exception: src.type() == CV_8UC1
Unfortunately the EmguCV documentation lacks any information about this error, and also any mention of prerequisites. But EmguCV is just a wrapper around OpenCV, and if we check the OpenCV documentation we find:
src – Source 8-bit single-channel image.
I.e. ThresholdAdaptive is limited to 8-bit grayscale and will not work for 16-bit grayscale images.
If you just want a simple threshold you might want to use ThresholdBinary, or one of the other simple thresholding methods.
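A minimal workaround sketch, assuming you just want to scale the 16-bit data down to 8 bits before thresholding (iimg is the Image<Gray, ushort> from the question; the 1/256 scale factor assumes the full 16-bit range is used):

```csharp
// Scale 16-bit values down to 8-bit (divide by 256),
// then ThresholdAdaptive accepts the image
Image<Gray, byte> img8 = iimg.ConvertScale<byte>(1 / 256.0, 0);
var blr = img8.ThresholdAdaptive(new Gray(100),
    AdaptiveThresholdType.GaussianC,
    ThresholdType.ToZero,
    5,
    new Gray(2));
```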

Loading and displaying a 16 (12) bit grayscale png into a PictureBox

I'm using a framework for some camera hardware called IDS Peak, and we receive 16-bit grayscale images back from the framework. The framework itself can write the files to disk as PNGs, and that's all well and good, but how do I display them in a PictureBox in WinForms?
Windows Bitmap does not support 16-bit grayscale, so the following code throws a 'Parameter is not valid.' System.ArgumentException:
var image = new Bitmap(width, height, stride, System.Drawing.Imaging.PixelFormat.Format16bppGrayScale, iplImg.Data());
iplImg.Data() here is an IntPtr to the bespoke Image format of the framework.
Considering Windows Bitmap does not support the format, and I can write the files using the framework to PNGs, how can I do one of the following:
Convert to a different object type other than Bitmap to display directly in Winforms without reading from the files.
Load the 16-bit grayscale PNG files into the PictureBox control (or any other control type, it doesn't have to be a PictureBox).
(1) is preferable as it doesn't require file IO, but if (2) is the only possibility that's completely fine, as I need to both save and display them anyway; (1) only requires a write operation and not a secondary read.
The files before writing to disk are actually monochrome with 12 bits per pixel, packed.
While it is possible to display 16-bit images, for example by hosting a WPF control in WinForms, you probably want to apply a windowing function to reduce the image to 8 bits before display.
So let's use unsafe code and pointers for speed:
var bitmapData = myBitmap.LockBits(
new Rectangle(0, 0, myBitmap.Width, myBitmap.Height),
ImageLockMode.ReadWrite,
myBitmap.PixelFormat);
try
{
var ptr = (byte*)bitmapData.Scan0;
var stride = bitmapData.Stride;
var width = bitmapData.Width;
var height = bitmapData.Height;
// Conversion Code
}
finally
{
myBitmap.UnlockBits(bitmapData);
}
or using the WPF imaging classes, which generally have better 16-bit support:
var myBitmap = new WriteableBitmap(new BitmapImage(new Uri("myBitmap.jpg", UriKind.Relative)));
myBitmap.Lock();
try
{
var ptr = (byte*)myBitmap.BackBuffer;
...
}
finally
{
myBitmap.Unlock();
}
To loop over all the pixels you would use a double loop:
for (int y = 0; y < height; y++)
{
var row = (ushort*)(ptr + y * stride);
for (int x = 0; x < width; x++)
{
var pixelValue = row[x];
// Scaling code
}
}
And to scale the value you could use linear scaling from the [min, max] range of the image to the 0-255 range of a byte:
var slope = (byte.MaxValue + 1f) / (maxUshortValue - minUshortValue);
var scaled = (int)((pixelValue + 0.5f - minUshortValue) * slope);
scaled = scaled > byte.MaxValue ? byte.MaxValue: scaled;
scaled = scaled < 0 ? 0: scaled;
var byteValue = (byte)scaled;
The maxUshortValue / minUshortValue would either be computed from the min/max values of the image, or configured by the user. You would also need to create a target 8-bit grayscale bitmap to write the result into for display, or write the same value to each color channel of a color image.
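The scaling above can be wrapped into a small helper; this is a sketch (the method name ScaleTo8Bit is mine, and the 12-bit window bounds in the usage comment are only an example):

```csharp
// Map a 16-bit sample into 0..255 using a linear window between
// minUshortValue and maxUshortValue, clipping values outside it.
static byte ScaleTo8Bit(ushort pixelValue, ushort minUshortValue, ushort maxUshortValue)
{
    var slope = (byte.MaxValue + 1f) / (maxUshortValue - minUshortValue);
    var scaled = (int)((pixelValue + 0.5f - minUshortValue) * slope);
    scaled = scaled > byte.MaxValue ? byte.MaxValue : scaled;
    scaled = scaled < 0 ? 0 : scaled;
    return (byte)scaled;
}

// e.g. for 12-bit data: ScaleTo8Bit(pixelValue, 0, 4095)
```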

Negative values when comparing two histograms

I am using an EmguCV (C#) histogram to compare two HSV images, but sometimes I get negative values. I assumed that when I compare two histograms the result would lie in the interval [0, 1]; however, the result for hue or saturation is sometimes a negative number like -0.145.
First, I take the image byte array and convert it into an Image<Hsv, Byte> (img1):
Image<Hsv, Byte> img1 = null;
Mat byteImageMat = new Mat();
Mat hsvMat = new Mat();
CvInvoke.Imdecode(request.ByteImage, Emgu.CV.CvEnum.ImreadModes.AnyColor, byteImageMat);
CvInvoke.CvtColor(byteImageMat, hsvMat, ColorConversion.Bgr2Hsv);
img1 = hsvMat.ToImage<Hsv, Byte>();
Then I create DenseHistograms and split the individual channels:
DenseHistogram ComparedHistoHue = new DenseHistogram(180, new RangeF(0, 180));
DenseHistogram ComparedHistoSaturation = new DenseHistogram(256, new RangeF(0, 256));
DenseHistogram ComparedHistoBrightness = new DenseHistogram(256, new RangeF(0, 256));
Image<Gray, Byte> hueChannel = img1[0];
Image<Gray, Byte> saturationChannel = img1[1];
Image<Gray, Byte> brightnessChannel = img1[2];
After that I calculate the histograms:
ComparedHistoHue.Calculate(new Image<Gray, Byte>[] { hueChannel }, false, null);
ComparedHistoSaturation.Calculate(new Image<Gray, Byte>[] { saturationChannel }, false, null);
ComparedHistoBrightness.Calculate(new Image<Gray, Byte>[] { brightnessChannel }, false, null);
At this point I load the histograms I created earlier from a file and assign them to Mats (loadedMatHue, loadedMatSaturation and loadedMatBrightness):
double hue = CvInvoke.CompareHist(loadedMatHue, ComparedHistoHue, Emgu.CV.CvEnum.HistogramCompMethod.Correl);
double satuation = CvInvoke.CompareHist(loadedMatSaturation, ComparedHistoSaturation, Emgu.CV.CvEnum.HistogramCompMethod.Correl);
double brightnes = CvInvoke.CompareHist(loadedMatBrightness, ComparedHistoBrightness, Emgu.CV.CvEnum.HistogramCompMethod.Correl);
Can somebody tell me why the hue or saturation variable holds a negative value? In my tests there is always only one negative value at any one moment across the three doubles.
For HSV, the idea that the numbers would be between 0 and 1 is incorrect. If you want your image to have values between 0 and 1, then that image would have to be in grayscale.
In HSV, you split it up into three definitions, Hue, Saturation, and Value.
Hue is stored from 0 to 360 degrees, but can become negative if you rotate the hue past 0.
Saturation is considered to range from 0 to 1, i.e. grayscale values. If you have negative values in this channel, disregard them, as the lowest this value should be is 0. The same can be said for the highest value, which will be 1, since the highest value of a grayscale channel can only be one. Like I said before, it's best to think of this channel in terms of grayscale from 0 to 1.
Value is very similar to saturation, the only difference being that value is considered the "lightness of the color, by the given S[aturation]". This value also can only be between 0 and 1, and any values outside of this range should be clipped.
If you want a more in-depth explanation, you can check out this Stack post, which is very detailed and deserves credit here.
If you do have to clip these values, you can always access the pixel values for each channel using some sample code below.
Image<Hsv, Byte> sampleImage = new Image<Hsv, Byte>(@"path\to\image");
//X and y are the pixel coordinates on an image
//Hue channel
byte hue = sampleImage.Data[y,x,0];
//Saturation channel
byte sat = sampleImage.Data[y,x,1];
//Value channel
byte val = sampleImage.Data[y,x,2];
You can throw these values inside of a loop and check if a pixel is outside the boundaries, and if it is replace it with the high or low value respectively.
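Such a loop could look like this (a sketch; lowSat and highSat are illustrative bounds I introduce here, and sampleImage is the Image<Hsv, Byte> from above):

```csharp
byte lowSat = 0, highSat = 255;   // illustrative clipping bounds

// Clamp every saturation value into [lowSat, highSat]
for (int y = 0; y < sampleImage.Rows; y++)
{
    for (int x = 0; x < sampleImage.Cols; x++)
    {
        byte s = sampleImage.Data[y, x, 1];
        if (s < lowSat) sampleImage.Data[y, x, 1] = lowSat;
        else if (s > highSat) sampleImage.Data[y, x, 1] = highSat;
    }
}
```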

EMGU: BayerGB 8bit to Image<Bgr, byte>

I have a byte array generated from a camera.
I want to convert this C# array (byte[] myBuffer) to an Emgu BGR image.
The RAW image from the camera has 2590 x 1942 pixels.
The format is BayerGB 8-bit, so there are 2590 (cols) x 1942 (rows) x 2 (bytes) = 10059560 bytes.
I grab the data from the camera and get a byte array with the data:
byte[] myBuffer = myGrabberFuncion();
//myBuffer.lenght == 10059560
Now I want to convert it to a BGR image. I think I should use the Emgu.CV.CvInvoke.CvtColor(SRC, DEST, COLOR_CONV, CHANN) function.
But I can't feed the function a byte[] as the SRC argument, as it needs an IInputArray.
So I can feed in a Mat, a Matrix, or an Image<#, #>.
But how should I put the buffer data into one of these structures?
If I use Matrix<byte> bayer = new ecv.Matrix<byte>(rows, cols, channels) I can't use 1 channel, because it would contain only half the bytes of myBuffer.
I can't use 2 channels either, because an OpenCV assert tells me that the CvEnum.ColorConversion.BayerGb2Bgr conversion wants a 1-channel input image (it says: OpenCV: scn == 1 && (dcn == 3 || dcn == 4)).
As a workaround I tried to make an image with double width and convert that; this doesn't give an error, but the output image is black (all zeroes):
byte[] buffer = myGrabberFunction();
Image<Structure.Bgr, byte> lastFrameBgr = new Image<Structure.Bgr, byte>(2590, 1942);
//test with Matrix of bytes:
ecv.Matrix<byte> bayerMatrix = new ecv.Matrix<byte>(lastFrameBgr.Height, lastFrameBgr.Rows, 2);
bayerMatrix.Bytes = buffer;
ecv.CvInvoke.CvtColor(bayerMatrix, lastFrameBgr, ecv.CvEnum.ColorConversion.BayerGb2Bgr, 3);
//result in a black Image
//test with an 1 channel Byte depth image
Image<Structure.Gray, byte> bayerGrayImage = new Image<Structure.Gray, byte>(lastFrameBgr.Width * 2, lastFrameBgr.Height);
bayerGrayImage.Bytes = buffer;
ecv.CvInvoke.CvtColor(bayerGrayImage, lastFrameBgr, ecv.CvEnum.ColorConversion.BayerGb2Bgr, 3);
//result in a black Image
My question:
How can I transform my array of bytes into something I can use as the source of the Emgu.CV.CvInvoke.CvtColor(...) function?
Or: how should I convert an array of bytes into an Emgu BGR image?
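A hedged sketch of one approach: if the data really is 8-bit BayerGB, there is one byte per pixel, so the buffer should hold exactly width x height bytes; wrap it in a single-channel Image<Gray, byte> (which implements IInputArray) and demosaic that. If the camera actually delivers two bytes per pixel, as the buffer size suggests, the data is more likely 10/12/16-bit Bayer stored as ushort; it would then need to be loaded as Image<Gray, ushort> and converted down to 8 bits before the BayerGb2Bgr conversion.

```csharp
int width = 2590, height = 1942;

// One byte per pixel for an 8-bit Bayer pattern
var bayer = new Image<Gray, byte>(width, height);
bayer.Bytes = myBuffer;   // must be exactly width * height bytes

// Demosaic: 1-channel Bayer in, 3-channel BGR out
var lastFrameBgr = new Image<Bgr, byte>(width, height);
CvInvoke.CvtColor(bayer, lastFrameBgr, ColorConversion.BayerGb2Bgr);
```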

How to convert Bgr pixel to Hsv pixel in EmguCV?

I'd like to convert a Bgr value (one pixel) to an Hsv value. How can I do that (without writing the conversion code from scratch) in EmguCV?
Please note that I am not interested in converting a whole image's color space, only one pixel; therefore CvInvoke.cvCvtColor() does not work for me.
If you want to do this within EmguCV, just read the image into an Image, get the value of the pixel, and stick it in an Hsv structure.
For example:
static void Main(string[] args)
{
bool haveOpenCL = CvInvoke.HaveOpenCL;
bool haveOpenClGpu = CvInvoke.HaveOpenCLCompatibleGpuDevice;
CvInvoke.UseOpenCL = true;
Emgu.CV.Image<Bgr, Byte> lenaBgr = new Image<Bgr, Byte>(@"D:\OpenCV\opencv-3.2.0\samples\data\Lena.jpg");
CvInvoke.Imshow("Lena BSG", lenaBgr);
Bgr color = lenaBgr[100, 100];
Console.WriteLine("Bgr: {0}", color.ToString());
Emgu.CV.Image<Hsv, Byte> lenaHsv = new Image<Hsv, Byte>(@"D:\OpenCV\opencv-3.2.0\samples\data\Lena.jpg");
CvInvoke.Imshow("Lena HSV", lenaHsv);
Hsv colorHsv = lenaHsv[100, 100];
Console.WriteLine("HSV: {0}", colorHsv.ToString());
CvInvoke.WaitKey(0);
}
The result:
Bgr: [87,74,182]
HSV: [176,151,182]
(Screenshots: Lena BGR and Lena HSV)
Okay, I already found a way using some help from the .NET Framework.
Given a Bgr pixel:
1- Convert the color to System.Drawing.Color:
System.Drawing.Color intermediate = System.Drawing.Color.FromArgb((int)pixel.Red, (int)pixel.Green, (int)pixel.Blue);
2- Construct the Hsv from the intermediate:
Hsv hsvPixel = new Hsv(intermediate.GetHue(), intermediate.GetSaturation(), intermediate.GetBrightness());
Cheers.
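An alternative sketch that stays inside OpenCV's own conversion math (note that GetHue/GetSaturation/GetBrightness use different ranges than OpenCV's 8-bit HSV, where H is 0-179 and S/V are 0-255): put the single pixel into a 1x1 image and convert that.

```csharp
// 'pixel' is the Bgr value to convert (illustrative value here)
Bgr pixel = new Bgr(87, 74, 182);

// Put the single BGR pixel into a 1x1 image
var bgrOne = new Image<Bgr, byte>(1, 1);
bgrOne[0, 0] = pixel;

// Convert the 1x1 image; this uses OpenCV's BGR -> HSV conversion
var hsvOne = bgrOne.Convert<Hsv, byte>();
Hsv hsvPixel = hsvOne[0, 0];
```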