I have a byte array generated from a camera.
I want to convert this C# array (byte[] myBuffer) to an Emgu BGR image.
The RAW image from the camera is 2590 x 1942 pixels.
The format is BayerGB 8-bit, so there are 2590 (cols) x 1942 (rows) x 2 (bytes) = 10059560 bytes.
I grab the data from the camera and I have a byte array with the data:
byte[] myBuffer = myGrabberFunction();
//myBuffer.Length == 10059560
Now I want to convert it to a BGR image. I think I should use the Emgu.CV.CvInvoke.CvtColor(SRC, DEST, COLOR_CONV, CHANN) function.
But I can't feed the function a byte[] as the SRC argument, as it needs an "IInputArray".
So I can feed in a Mat, a Matrix, or an Image<#, #>.
But how should I put the buffer data into one of these structures?
If I use Matrix<byte> Bayer = new ecv.Matrix<byte>(rows, cols, channels), I can't use 1 channel, because the matrix would then hold only half the bytes of myBuffer.
I can't use 2 channels either, because an assert in OpenCV tells me that the color conversion CvEnum.ColorConversion.BayerGb2Bgr wants a 1-channel input image (it says: OpenCV: scn == 1 && (dcn == 3 || dcn == 4)).
As a work-around I tried to make an image with double width and convert that; this doesn't give an error, but the output image is black (all zeroes):
byte[] buffer = myGrabberFunction();
Image<Structure.Bgr, byte> lastFrameBgr = new Image<Structure.Bgr, byte>(2590, 1942);
//test with Matrix of bytes:
ecv.Matrix<byte> bayerMatrix = new ecv.Matrix<byte>(lastFrameBgr.Height, lastFrameBgr.Rows, 2);
bayerMatrix.Bytes = buffer;
ecv.CvInvoke.CvtColor(bayerMatrix, lastFrameBgr, ecv.CvEnum.ColorConversion.BayerGb2Bgr, 3);
//result in a black Image
//test with a 1-channel Byte-depth image
Image<Structure.Gray, byte> bayerGrayImage = new Image<Structure.Gray, byte>(lastFrameBgr.Width * 2, lastFrameBgr.Height);
bayerGrayImage.Bytes = buffer;
ecv.CvInvoke.CvtColor(bayerGrayImage, lastFrameBgr, ecv.CvEnum.ColorConversion.BayerGb2Bgr, 3);
//result in a black Image
My question:
How can I transform my array of bytes into something I can use as the source in the Emgu.CV.CvInvoke.CvtColor(...) function?
Or: how should I convert an array of bytes into an Emgu BGR image?
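For reference, a minimal sketch of one way to hand the buffer to CvtColor: wrap the raw bytes in a Mat without copying. This is only a sketch and assumes the camera actually delivers one byte per pixel (true 8-bit Bayer); if it delivers two bytes per pixel, the same idea would apply with DepthType.Cv16U and a step of cols * 2.

// Sketch only -- assumes true 8-bit BayerGB data (one byte per pixel).
// using Emgu.CV; using Emgu.CV.CvEnum; using System.Runtime.InteropServices;
byte[] buffer = myGrabberFunction();
int rows = 1942, cols = 2590;

GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
try
{
    // Wrap the pinned buffer in a 1-channel, 8-bit Mat (step = bytes per row).
    using (Mat bayer = new Mat(rows, cols, DepthType.Cv8U, 1,
                               handle.AddrOfPinnedObject(), cols))
    using (Mat bgr = new Mat())
    {
        CvInvoke.CvtColor(bayer, bgr, ColorConversion.BayerGb2Bgr);
        // bgr is now a 3-channel BGR image; an Image<Bgr, byte> can be
        // obtained via bgr.ToImage<Bgr, byte>() if needed.
    }
}
finally
{
    handle.Free();
}

Wrapping the pinned buffer avoids the row-padding gotcha of setting Image<,>.Bytes directly: a width of 2590 is not a multiple of 4, so a copied-in flat buffer can end up misaligned against the image's internal stride.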
Related
I am trying to threshold and also apply a Gaussian blur to a raw byte image in EmguCV. I have a raw array of bytes read from a DICOM file; I store them in an empty image, and as soon as I apply a threshold or a Gaussian blur I get this exception:
OpenCv: src.type() == CV_8UC1.
Thank you.
var pixelData_ = dataset.GetValues<byte>(DicomTag.PixelData);
var splitPixelSpacing = pixelSpacing.Split('\\');
var IPP = dataset.GetString(DicomTag.ImagePositionPatient);
var iimg = new Image<Gray, ushort>(cols, rows);
var parse = float.Parse(splitPixelSpacing[0], CultureInfo.InvariantCulture.NumberFormat);
var newPixelSpacing = parse * (cols / 512);
iimg.Bytes = pixelData_;
iimg = iimg.Resize(512, 512, Inter.Area);
var imgBin = new Image<Gray, byte>(512, 512, new Gray(10));
var blr = iimg.ThresholdAdaptive(new Gray(100),
AdaptiveThresholdType.GaussianC,
ThresholdType.ToZero,
5,
new Gray(2));
//I set here a breakpoint then i get these Exception:src.type() == CV_8UC1.
Unfortunately the documentation of EmguCV lacks any information about this error, and also lacks any prerequisites. But EmguCV is just a wrapper around OpenCV, and if we check that documentation we find:
src Source 8-bit single-channel image.
I.e., ThresholdAdaptive is limited to 8-bit grayscale and will not work for 16-bit grayscale images.
If you just want a simple threshold you might want to use ThresholdBinary, or one of the other simple thresholding methods.
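Alternatively, converting the image down to 8 bits first makes ThresholdAdaptive usable. A sketch, assuming iimg is the Image<Gray, ushort> from the question and that the full 16-bit range is in use (the scale factor depends on the actual bit depth of the DICOM pixels):

// Scale 16-bit grayscale down to 8 bits before adaptive thresholding (sketch).
// For 16 significant bits, divide by 256; for 12-bit data stored in 16 bits,
// dividing by 16 would preserve more of the usable range.
Image<Gray, byte> img8 = iimg.ConvertScale<byte>(1.0 / 256.0, 0);

var blr = img8.ThresholdAdaptive(new Gray(100),
    AdaptiveThresholdType.GaussianC,
    ThresholdType.ToZero,
    5,
    new Gray(2));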
I have an image sensor board for embedded development for which I need to capture a stream of images and output them in 8-bit monochrome / grayscale format. The imager output is 12-bit monochrome (which takes 2 bytes per pixel).
In the code, I have an IntPtr to a memory buffer that has the 12-bit image data, from which I have to extract and convert that data down to an 8-bit image. This is represented in memory something like this (with a bright light activating the pixels):
As you can see, every second byte contains the LSBs, which I want to discard, thereby keeping only the odd-numbered bytes (to put it another way). The best solution I can conceptualize is to iterate through the memory, but that's the rub: I can't get that to work. What I need help with is an algorithm in C# to do this.
Here's a sample image that represents a direct creation of a Bitmap object from the IntPtr as follows:
bitmap = new Bitmap(imageWidth, imageHeight, imageWidth, PixelFormat.Format8bppIndexed, pImage);
// Failed Attempt #1
unsafe
{
    IntPtr pImage; // pointer to buffer containing 12-bit image data from imager
    int i = 0, imageSize = (imageWidth * imageHeight * 2); // two bytes per pixel
    byte[] imageData = new byte[imageSize];
    do
    {
        // Should I bitwise shift?
        imageData[i] = (byte)(pImage + i) << 8; // Doesn't compile, need help here!
    } while (i++ < imageSize);
}
// Failed Attempt #2
IntPtr pImage; // pointer to buffer containing 12-bit image data from imager
imageSize = imageWidth * imageHeight;
byte[] imageData = new byte[imageSize];
Marshal.Copy(pImage, imageData, 0, imageSize);
// I tried with and without this loop. Neither gives me images.
for (int i = 0; i < imageData.Length; i++)
{
    if (0 == i % 2) imageData[i / 2] = imageData[i];
}
Bitmap bitmap;
using (var ms = new MemoryStream(imageData))
{
    bitmap = new Bitmap(ms);
}
// This also introduced a memory leak somewhere.
Alternatively, if there's a way to do this with a Bitmap, byte[], MemoryStream, etc. that works, I'm all ears, but everything I've tried has failed.
Here is the algorithm that my coworkers helped formulate. It creates two new (unmanaged) pointers: one 8 bits wide and the other 16 bits.
By stepping through one word at a time and shifting off the last 4 bits of the source, we get a new 8-bit image containing only the MSBs. Both pointers traverse the same number of words, but since the words are different sizes, the pointers advance at different rates as we iterate.
unsafe
{
    byte* p_bytebuffer = (byte*)pImage;    // 8-bit destination, same buffer
    short* p_shortbuffer = (short*)pImage; // 16-bit source words
    for (int i = 0; i < imageWidth * imageHeight; i++)
    {
        // Shift off the 4 LSBs, keeping the 8 MSBs of each 12-bit value.
        // The byte write never overtakes the short read, so converting
        // in place over the same buffer is safe.
        *p_bytebuffer++ = (byte)(*p_shortbuffer++ >> 4);
    }
}
In terms of performance, this appears to be very fast with no perceivable difference in framerate.
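If unsafe code is not an option, a managed variant of the same idea is possible. A sketch, assuming pImage points to imageWidth * imageHeight little-endian 16-bit words:

// Managed equivalent: copy the 16-bit words out, then shift each down to 8 bits.
// using System.Runtime.InteropServices;
int pixelCount = imageWidth * imageHeight;
short[] words = new short[pixelCount];
Marshal.Copy(pImage, words, 0, pixelCount);

byte[] imageData = new byte[pixelCount];
for (int i = 0; i < pixelCount; i++)
{
    // Keep the top 8 of the 12 data bits; cast through ushort to avoid
    // sign extension on values with the high bit set.
    imageData[i] = (byte)((ushort)words[i] >> 4);
}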
Special thanks to @Herohtar for spending a substantial amount of time in chat with me attempting to help me solve this.
I am trying to convert an object to an image.
I grab the object from a usb3 camera using the following:
object RawData = axActiveUSB1.GetImageWindow(0,0,608,608);
This returns a Variant (SAFEARRAY)
After reviewing further, RawData = {byte[1824, 608]}. The image is 608 x 608, so I'm guessing 1824 is 3 times the width due to the RGB components of the image.
The camera's pixel format is BayerRGB8 and, according to the API I am using, the data type is represented in bytes:
Camera Pixel Format | Output Format | Data type | Dimensions
Bayer8 | 24-bit RGB | Byte | 0 to SizeX * 3 - 1, 0 to Lines - 1
I can convert it to a byte array using this code, found at Convert any object to a byte[]:
private byte[] ObjectToByteArray(Object obj)
{
    if (obj == null)
        return null;
    BinaryFormatter bf = new BinaryFormatter();
    MemoryStream ms = new MemoryStream();
    bf.Serialize(ms, obj);
    return ms.ToArray();
}
From here, I then do the following (all of this code was also found on, or derived from, Stack Overflow):
// convert object to bytes
byte[] imgasbytes = ObjectToByteArray(RawData);
// create a bitmap and put data in it to go into the picturebox
var bitmap = new Bitmap(608, 608, PixelFormat.Format24bppRgb);
var bitmap_data = bitmap.LockBits(new Rectangle(0, 0,bitmap.Width, bitmap.Height), ImageLockMode.WriteOnly, bitmap.PixelFormat);
Marshal.Copy(imgasbytes, 0, bitmap_data.Scan0, imgasbytes.Length );
bitmap.UnlockBits(bitmap_data);
var result = bitmap as Image; // this line not even really necessary
PictureBox.Image = result;
The code works, but I should see this:
But I see this:
I've done this in Python and had similar issues, which I was able to resolve, but I'm not as strong in C# and am unable to progress from here. I need to rotate my image 90 degrees, but I also think my issue relates to incorrectly converting the array. I think I need to convert my object (SAFEARRAY) to a multidimensional array so that the R, G, and B components sit on top of one another. I have looked at many examples of how to do this, but I do not understand how to go about it.
Any feedback is greatly appreciated on what I am doing wrong.
EDIT
I'm looking at this -> Convert RGB8 byte[] to Bitmap
which may be related to my issue.
It looks like the issue was exactly as I described.
In the end, the main issue was that the array needed to be rotated.
I found a solution here ->
Rotate M*N Matrix (90 degrees)
When I rotated the image, it resolved the picture issue I was seeing above. While my image is inverted now, I understand the issue and, as a result, am not seeing the problem any more.
Here is the code in case anyone runs into the same issue
// Rotate the [rows, cols] array 90 degrees into a [cols, rows] array.
byte[,] newImageAsBytes = new byte[ImageAsBytes.GetLength(1), ImageAsBytes.GetLength(0)];
int newColumn, newRow = 0;
// Walk the old columns from last to first; each one becomes a new row.
for (int oldColumn = ImageAsBytes.GetLength(1) - 1; oldColumn >= 0; oldColumn--)
{
    newColumn = 0;
    for (int oldRow = 0; oldRow < ImageAsBytes.GetLength(0); oldRow++)
    {
        newImageAsBytes[newRow, newColumn] = ImageAsBytes[oldRow, oldColumn];
        newColumn++;
    }
    newRow++;
}
byte[] b = ObjectToByteArray(newImageAsBytes);
var bitmap = new Bitmap(608, 608, PixelFormat.Format24bppRgb); // 608 is my image size and I am working with a camera that uses BayerRGB8
var bitmap_data = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.WriteOnly, bitmap.PixelFormat);
Marshal.Copy(b, 0, bitmap_data.Scan0, b.Length);
bitmap.UnlockBits(bitmap_data);
var result = bitmap as Image; // this line not even really necessary
PictureBox.Image = result;
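A side note on ObjectToByteArray here: BinaryFormatter.Serialize wraps the array in .NET serialization headers, so the byte[] it produces is not a flat copy of the pixels. A header-free copy can be taken with Buffer.BlockCopy instead; a sketch, assuming 608 x 608 pixels at 3 bytes each (each 1824-byte row is already a multiple of 4, so there is no stride padding to account for):

// Flatten the rotated byte[,] directly, with no serialization headers (sketch).
byte[] flat = new byte[newImageAsBytes.Length];   // Length counts every element
Buffer.BlockCopy(newImageAsBytes, 0, flat, 0, flat.Length);
Marshal.Copy(flat, 0, bitmap_data.Scan0, flat.Length);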
I know that we desaturate an image by decreasing the values in the saturation channel. I want to accomplish this in C# with Emgu.
For instance, here is C++ code using OpenCV to do so:
Mat Image = imread("images/any.jpg");
// Specify scaling factor
float saturationScale = 0.01;
Mat hsvImage;
// Convert to HSV color space
cv::cvtColor(Image,hsvImage,COLOR_BGR2HSV);
// Convert to float32
hsvImage.convertTo(hsvImage,CV_32F);
vector<Mat>channels(3);
// Split the channels
split(hsvImage,channels);
// Multiply S channel by scaling factor
channels[1] = channels[1] * saturationScale;
// Clipping operation performed to limit pixel values
// between 0 and 255
min(channels[1],255,channels[1]);
max(channels[1],0,channels[1]);
// Merge the channels
merge(channels,hsvImage);
// Convert back from float32
hsvImage.convertTo(hsvImage,CV_8UC3);
Mat imSat;
// Convert to BGR color space
cv::cvtColor(hsvImage,imSat,COLOR_HSV2BGR);
// Display the images
Mat combined;
cv::hconcat(Image, imSat, combined);
namedWindow("Original Image -- Desaturated Image", CV_WINDOW_AUTOSIZE);
In C# I have:
float saturationScale = 0.01f;
var img = new Image<Bgr, byte>("images/any.jpg");
var imhHsv = img.Convert<Hsv, byte>();
var channels = imhHsv.Split();
// Multiply S channel by scaling factor and clip (limit)
channels[1] = (channels[1] * saturationScale);
I am not sure how to merge the modified saturation channel back into imhHsv. If I do this:
CvInvoke.Merge(channels, imhHsv);
there is an error:
cannot convert 'Emgu.CV.Image[]' to
'Emgu.CV.IInputArrayOfArrays'
I put a VectorOfMat into CvInvoke.Merge and it works:
Mat[] m = new Mat[3];
m[0] = CvInvoke.CvArrToMat(channels[0]);
m[1] = CvInvoke.CvArrToMat(channels[1]);
m[2] = CvInvoke.CvArrToMat(channels[2]);
VectorOfMat vm = new VectorOfMat(m);
CvInvoke.Merge(vm, imhHsv);
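Putting it together, an end-to-end sketch of the desaturation, assuming Emgu CV 3.x or later, where Image<,> exposes a .Mat property (which avoids the CvArrToMat calls):

// Desaturation sketch -- scale the S channel of an HSV image, then merge back.
// using Emgu.CV; using Emgu.CV.Structure; using Emgu.CV.Util;
double saturationScale = 0.01;

var img = new Image<Bgr, byte>("images/any.jpg");
var hsv = img.Convert<Hsv, byte>();
Image<Gray, byte>[] channels = hsv.Split();

// Scaling a byte image by a factor < 1 cannot overflow, so no explicit
// clipping is needed here (unlike the float path in the C++ version).
channels[1] = channels[1] * saturationScale;

using (var vm = new VectorOfMat(channels[0].Mat, channels[1].Mat, channels[2].Mat))
{
    CvInvoke.Merge(vm, hsv);
}
var desaturated = hsv.Convert<Bgr, byte>();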
I'm trying to send a bitmap over a TCP/IP connection. So far my program works as it should, but during debugging I discovered a strange value for my bitmap byte[].
I open a 24-bit bitmap and convert it to 16 bit. The bitmap is 800x600, so the byte[] length should be 800 * 600 * 2 bytes = 960000 bytes... But my array is 960054...
Where do the extra bytes come from?
Console.WriteLine("Bitmap auf 16Bit anpassen...\n");
Rectangle r = new Rectangle(0,0,bitmap_o.Width, bitmap_o.Height);
Bitmap bitmap_n = bitmap_o.Clone(r, PixelFormat.Format16bppRgb555);
bitmap_n.Save("test2.bmp");
Console.WriteLine("Neue Bitmap-Eigenschaften:");
Console.WriteLine(bitmap_n.Width.ToString());
Console.WriteLine(bitmap_n.Height.ToString());
Console.WriteLine(bitmap_n.PixelFormat.ToString());
byte[] data = new byte[0];
MemoryStream mem_stream = new MemoryStream();
bitmap_n.Save(mem_stream, ImageFormat.Bmp);
data = mem_stream.ToArray();
mem_stream.Close();
Console.WriteLine(data.Length.ToString());
stream.Write(data, 0, 960000);
Console.WriteLine("Sending data...");
The extra bytes are the file header, which contains, for example:
Bitmap file signature
Image dimensions (pixel size)
Bit depth
Resolution (ppi)
There can also be extra bytes within the pixel data. In your case 800 pixels at two bytes each makes 1600 bytes per scan line, which is already evenly divisible by four; but if you had, for example, 145 pixels at three bytes each, that would make 435 bytes, so a padding byte would be added to each scan line to bring it to 436, which is evenly divisible by four.
Ref: BMP file format
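The rounding rule behind that padding can be written down directly (width and bytesPerPixel are placeholders for your image's values):

// Pad each scan line up to the next multiple of 4 bytes (BMP/GDI+ convention).
int stride = ((width * bytesPerPixel + 3) / 4) * 4;   // e.g. 145 * 3 = 435 -> 436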
There may be extra bytes in the bitmap array that pad the scan lines to nicer numbers.
The effective length of a scan line is called the 'stride', and you can check it via the BitmapData.Stride field.
The total length of a Bitmap is calculated like this:
int size1 = bmp1Data.Stride * bmp1Data.Height;
You can have a look at a post that uses this to create an array for the LockBits method in order to scan a whole Bitmap.
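For this question specifically, copying the raw pixels out with LockBits would give exactly 800 * 600 * 2 bytes with no file header. A sketch, copying row by row so any stride padding is skipped (bitmap_n and stream are the variables from the question):

// Extract headerless 16-bit pixel data from the bitmap (sketch).
// using System.Drawing; using System.Drawing.Imaging; using System.Runtime.InteropServices;
Rectangle rect = new Rectangle(0, 0, bitmap_n.Width, bitmap_n.Height);
BitmapData bd = bitmap_n.LockBits(rect, ImageLockMode.ReadOnly, bitmap_n.PixelFormat);
try
{
    int bytesPerRow = bitmap_n.Width * 2;          // 16 bpp -> 2 bytes per pixel
    byte[] pixels = new byte[bytesPerRow * bitmap_n.Height];
    for (int y = 0; y < bitmap_n.Height; y++)
    {
        // Copy one scan line at a time; bd.Stride may include padding bytes.
        Marshal.Copy(bd.Scan0 + y * bd.Stride, pixels, y * bytesPerRow, bytesPerRow);
    }
    stream.Write(pixels, 0, pixels.Length);        // exactly 960000 bytes
}
finally
{
    bitmap_n.UnlockBits(bd);
}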