How to transfer an image from C++ to Unity3D using OpenCV? - c#

I am having trouble transferring an image from C++ to Unity3D (C#). I use OpenCV to read an image into a Mat and get its data pointer via img.data. When I build the DLL and call the corresponding function, every element of the resulting byte array is 0, so I want to know how to transfer an image from C++ to C# correctly. Thank you.
Sorry for the late reply; the code is as follows:
C++ code:
DLL_API BYTE* _stdcall MatToByte()
{
    // nHeight, nWidth, nBytes, nFlag and pImg are module-level globals
    Mat srcImg = imread("F:\\3121\\image\\image_0\\000000.png");
    nHeight = srcImg.rows;
    nWidth = srcImg.cols;
    nBytes = nHeight * nWidth * nFlag / 8;
    if (pImg)
        delete[] pImg;
    pImg = new BYTE[nBytes];
    memcpy(pImg, srcImg.data, nBytes);
    return pImg;
}
C# code:
[DllImport("Display")]
public static extern IntPtr MatToByte();

byte[] Bytes = new byte[nBytes];
IntPtr tmp = MatToByte();
Marshal.Copy(tmp, Bytes, 0, nBytes);
Texture2D texture = new Texture2D(width, height);
texture.LoadImage(Bytes);
left_plane.GetComponent<Renderer>().material.mainTexture = texture;

Related

Pass char* argument containing image data to C/C++ DLL (Glfw)

I want to pass image data to a C++ DLL function that looks like this:
GLFWcursor* glfwCreateCursor(
    const GLFWimage* image,
    int xhot,
    int yhot
)
It crashes supposedly because of this struct (C#):
[StructLayout(LayoutKind.Sequential)]
public struct GlfwImage
{
    public int width;
    public int height;
    public byte[] pixels;
}
"pixels" is of type "unsigned char *", width and height are simple int-values.
While the method can be loaded with that signature, I always get a System.AccessViolationException when actually calling it. I tried several datatypes for "pixels", including IntPtr and actual pointers, but to no effect.
Here is how I get the data and call it:
var bufferSize = texture.Size.Width * texture.Size.Height * 4;
IntPtr imageData = Marshal.AllocHGlobal(bufferSize);
GL.GetTexImage(TextureTarget.Texture2D, 0, PixelFormat.Rgba, PixelType.Bitmap, imageData);
byte[] imageChars = new byte[bufferSize];
Marshal.Copy(imageData, imageChars, 0, bufferSize);
GlfwImage cursorImage = new GlfwImage
{
    pixels = imageChars,
    width = texture.Size.Width,
    height = texture.Size.Height
};
GlfwCursor cursor = Glfw.CreateCursor(cursorImage, texture.Size.Width / 2, texture.Size.Height / 2);
Is there something I'm overlooking?

Resizing the image in C# and sending it to OpenCV results in distorted image

This is a follow-up question related to this one. Basically, I have a DLL which uses OpenCV to do image manipulation. There are two methods: one accepting an image path, and the other accepting a cv::Mat. The one working with the image path works fine; the one that accepts an image is problematic.
Here is the method which accepts the filename (DLL):
CDLL2_API void Classify(const char * img_path, char* out_result, int* length_of_out_result, int N)
{
    auto classifier = reinterpret_cast<Classifier*>(GetHandle());
    cv::Mat img = cv::imread(img_path);
    cv::imshow("img received from c#", img);
    std::vector<PredictionResults> result = classifier->Classify(std::string(img_path), N);
    std::string str_info = "";
    //...
    *length_of_out_result = ss.str().length();
}
Here is the method which accepts the image (DLL):
CDLL2_API void Classify_Image(unsigned char* img_pointer, unsigned int height, unsigned int width,
                              int step, char* out_result, int* length_of_out_result, int top_n_results)
{
    auto classifier = reinterpret_cast<Classifier*>(GetHandle());
    cv::Mat img = cv::Mat(height, width, CV_8UC3, (void*)img_pointer, step);
    std::vector<Prediction> result = classifier->Classify(img, top_n_results);
    //...
    *length_of_out_result = ss.str().length();
}
Here is the code in C# application: DllImport:
[DllImport(@"CDll2.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
static extern void Classify_Image(IntPtr img, uint height, uint width, int step, byte[] out_result, out int out_result_length, int top_n_results = 2);
The method which sends the image to the DLL:
private string Classify_UsingImage(Bitmap img, int top_n_results)
{
    byte[] res = new byte[200];
    int len;
    BitmapData bmpData;
    bmpData = img.LockBits(new Rectangle(0, 0, img.Width, img.Height), ImageLockMode.ReadOnly, img.PixelFormat);
    Classify_Image(bmpData.Scan0, (uint)bmpData.Height, (uint)bmpData.Width, bmpData.Stride, res, out len, top_n_results);
    //Remember to unlock!!!
    img.UnlockBits(bmpData);
    string s = ASCIIEncoding.ASCII.GetString(res);
    return s;
}
Now, this works well when I send an image to the DLL; if I use imshow() to show the received image, it is displayed just fine.
The Actual Problem:
However, when I resize the very same image and send it using the very same method above, the image is distorted.
I should add that if I resize an image using the C# method given below, then save it, and then pass the filename to the DLL to be opened using Classify(std::string(img_path), N);, it works perfectly.
Here is the screenshot showing an example of this happening:
Image sent from C# without being resized:
When The same image is first resized and then sent to the DLL:
Here the image is first resized (in C#), saved to the disk and then its filepath sent to the DLL:
This is the snippet responsible for resizing (C#):
/// <summary>
/// Resize the image to the specified width and height.
/// </summary>
/// <param name="image">The image to resize.</param>
/// <param name="width">The width to resize to.</param>
/// <param name="height">The height to resize to.</param>
/// <returns>The resized image.</returns>
public static Bitmap ResizeImage(Image image, int width, int height)
{
    var destRect = new Rectangle(0, 0, width, height);
    var destImage = new Bitmap(width, height);
    destImage.SetResolution(image.HorizontalResolution, image.VerticalResolution);
    using (var graphics = Graphics.FromImage(destImage))
    {
        graphics.CompositingMode = CompositingMode.SourceCopy;
        graphics.CompositingQuality = CompositingQuality.HighQuality;
        graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
        graphics.SmoothingMode = SmoothingMode.HighQuality;
        graphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
        using (var wrapMode = new ImageAttributes())
        {
            wrapMode.SetWrapMode(WrapMode.TileFlipXY);
            graphics.DrawImage(image, destRect, 0, 0, image.Width, image.Height, GraphicsUnit.Pixel, wrapMode);
        }
    }
    return destImage;
}
and this is the original image
This is the Classify method which uses the filepath to read the images:
std::vector<PredictionResults> Classifier::Classify(const std::string & img_path, int N)
{
    cv::Mat img = cv::imread(img_path);
    cv::Mat resizedImage;
    std::vector<PredictionResults> results;
    std::vector<float> output;
    // cv::imshow((std::string("img classify by path") + type2str(img.type())), img);
    if (IsResizedEnabled())
    {
        ResizeImage(img, resizedImage);
        output = Predict(resizedImage);
    }
    else
    {
        output = Predict(img);
        img.release();
    }
    N = std::min<int>(labels_.size(), N);
    std::vector<int> maxN = Argmax(output, N);
    for (int i = 0; i < N; ++i)
    {
        int idx = maxN[i];
        PredictionResults r;
        r.label = labels_[idx];
        r.accuracy = output[idx];
        results.push_back(r);
    }
    return results;
}
And this is the ResizeImage used in the method above:
void Classifier::ResizeImage(const cv::Mat & source_image, cv::Mat& resizedImage)
{
    Size size(GetResizeHeight(), GetResizeHeight());
    cv::resize(source_image, resizedImage, size); //resize image
    CHECK(!resizedImage.empty()) << "Unable to decode image ";
}
Problem 2:
Distortion after resizing aside, I am facing a discrepancy between resizing in C# and resizing with OpenCV itself.
I created another method using EmguCV (also given below), passed the needed information, and did not face any of the distortions that happen when we resize the image in C# and send it to the DLL.
However, this discrepancy made me want to understand what is causing these issues.
Here is the method which uses Emgu.CV.Mat; this is the code that works irrespective of resizing:
private string Classify_UsingMat(string imgpath, int top_n_results)
{
    byte[] res = new byte[200];
    int len;
    Emgu.CV.Mat img = new Emgu.CV.Mat(imgpath, ImreadModes.Color);
    if (chkResizeImageCShap.Checked)
    {
        CvInvoke.Resize(img, img, new Size(256, 256));
    }
    Classify_Image(img.DataPointer, (uint)img.Height, (uint)img.Width, img.Step, res, out len, top_n_results);
    string s = ASCIIEncoding.ASCII.GetString(res);
    return s;
}
Why do I care?
Because I get a different accuracy when I resize with OpenCV (both with EmguCV's CvInvoke.Resize() and with cv::resize()) than when I resize the image in C#, save it to disk, and send the image path to OpenCV.
So I either need to fix the distortion that happens when I deal with images in C#, or I need to understand why resizing in OpenCV gives different results than resizing in C#.
So, to summarize the issues and points made so far:
1. All else being equal, if we resize the image inside the C# application and pass the data as we did before, the image is distorted (example given above).
2. If we resize the image, save it to disk, and give its filename to OpenCV to create a new cv::Mat, it works perfectly without any issues.
3. If I use EmguCV and, instead of working with Bitmap, use Emgu.CV.Mat and send the needed parameters from that Mat object in C#, no distortion happens.
However, the accuracy I get from an image resized in C# (see #2) differs from the one I get from an image resized by OpenCV. It makes no difference whether I resize the image beforehand using CvInvoke.Resize() in C# and send the result to the DLL, or send the original image (using EmguCV) and resize it in the C++ code using cv::resize(). This is what prevents me from using EmguCV, or from passing the original image and resizing it inside the DLL with OpenCV.
Here are the images with different results, showing the issues:
--------------------No Resizing------------------------------
1.Using Bitmap-No Resize, =>safe, acc=0.580295
2.Using Emgu.Mat-No Resize =>safe, acc=0.580262
3.Using FilePath-No Resize, =>safe, acc=0.580262
--------------------Resize in C#------------------------------
4.Using Bitmap-CSharp Resize, =>unsafe, acc=0.770425
5.Using Emgu.Mat-EmguResize, =>unsafe, acc=0.758335
6.**Using FilePath-CSharp Resize, =>unsafe, acc=0.977649**
--------------------Resize in DLL------------------------------
7.Using Bitmap-DLL Resize, =>unsafe, acc=0.757484
8.Using Emgu.DLL Resize, =>unsafe, acc=0.758335
9.Using FilePath-DLL Resize, =>unsafe, acc=0.758335
I need to get the accuracy which I get at #6. As you can see, the EmguCV resize and the OpenCV resize function used in the DLL behave similarly and don't work as expected (i.e. like #2)! The C# resize method applied to the image is problematic, while if the image is resized, saved, and its filename passed, the result is fine.
You can see the screen shots depicting different scenarios here: http://imgur.com/a/xbgIQ
What I did was to use imdecode as EdChum suggested.
This is how the functions in the DLL and C# look now:
#ifdef CDLL2_EXPORTS
#define CDLL2_API __declspec(dllexport)
#else
#define CDLL2_API __declspec(dllimport)
#endif

#include "classification.h"

extern "C"
{
    CDLL2_API void Classify_Image(unsigned char* img_pointer, long data_len, char* out_result, int* length_of_out_result, int top_n_results = 2);
    //...
}
The actual method:
CDLL2_API void Classify_Image(unsigned char* img_pointer, long data_len,
                              char* out_result, int* length_of_out_result, int top_n_results)
{
    auto classifier = reinterpret_cast<Classifier*>(GetHandle());
    vector<unsigned char> inputImageBytes(img_pointer, img_pointer + data_len);
    cv::Mat img = imdecode(inputImageBytes, CV_LOAD_IMAGE_COLOR);
    cv::imshow("img just received from c#", img);
    std::vector<Prediction> result = classifier->Classify(img, top_n_results);
    //...
    *length_of_out_result = ss.str().length();
}
Here is the C# DllImport:
[DllImport(@"CDll2.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
static extern void Classify_Image(byte[] img, long data_len, byte[] out_result, out int out_result_length, int top_n_results = 2);
and this is the actual method sending the image back to the DLL:
private string Classify_UsingImage(Bitmap image, int top_n_results)
{
    byte[] result = new byte[200];
    int len;
    Bitmap img;
    if (chkResizeImageCShap.Checked)
        img = ResizeImage(image, int.Parse(txtWidth.Text), int.Parse(txtHeight.Text));
    else
        img = image;
    //this is for situations where the image is not read from disk and is stored in memory (e.g. it comes from a camera or snapshot)
    ImageFormat fmt = new ImageFormat(image.RawFormat.Guid);
    var imageCodecInfo = ImageCodecInfo.GetImageEncoders().FirstOrDefault(codec => codec.FormatID == image.RawFormat.Guid);
    if (imageCodecInfo == null)
    {
        fmt = ImageFormat.Jpeg;
    }
    using (MemoryStream ms = new MemoryStream())
    {
        img.Save(ms, fmt);
        byte[] image_byte_array = ms.ToArray();
        Classify_Image(image_byte_array, ms.Length, result, out len, top_n_results);
    }
    return ASCIIEncoding.ASCII.GetString(result);
}
By doing this, after resizing the image in C# we don't face any distortions at all.
I couldn't, however, figure out why the resize on the OpenCV side wouldn't work as expected!

How do I send an image from C# to a DLL accepting OpenCV Mat properly?

I have a C++ class for which I created a C DLL to be used in our C# solution. Now I need to send images to the DLL, but I don't know the proper way of doing this. This is the C++ function signature:
std::vector<Prediction> Classify(const cv::Mat& img, int N = 2);
And this is how I went about it; currently I have created this wrapper method in the DLL:
#ifdef CDLL2_EXPORTS
#define CDLL2_API __declspec(dllexport)
#else
#define CDLL2_API __declspec(dllimport)
#endif

#include "../classification.h"

extern "C"
{
    CDLL2_API void Classify_image(unsigned char* img_pointer, unsigned int height, unsigned int width, char* out_result, int* length_of_out_result, int top_n_results = 2);
    //...
}
Code in the DLL:
CDLL2_API void Classify_image(unsigned char* img_pointer, unsigned int height, unsigned int width, char* out_result, int* length_of_out_result, int top_n_results)
{
    auto classifier = reinterpret_cast<Classifier*>(GetHandle());
    cv::Mat img = cv::Mat(height, width, CV_32FC3, (void*)img_pointer);
    std::vector<Prediction> result = classifier->Classify(img, top_n_results);
    //misc code...
    *length_of_out_result = ss.str().length();
}
and in the C# code I wrote:
[DllImport(@"CDll2.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
static extern void Classify_image(byte[] img, uint height, uint width, byte[] out_result, out int out_result_length, int top_n_results = 2);

private string Classify(Bitmap img, int top_n_results)
{
    byte[] result = new byte[200];
    int len;
    var img_byte = (byte[])(new ImageConverter()).ConvertTo(img, typeof(byte[]));
    Classify_image(img_byte, (uint)img.Height, (uint)img.Width, result, out len, top_n_results);
    return ASCIIEncoding.ASCII.GetString(result);
}
but whenever I try to run the code, I get an Access Violation error:
An unhandled exception of type 'System.AccessViolationException'
occurred in Classification Using dotNet.exe
Additional information: Attempted to read or write protected memory.
This is often an indication that other memory is corrupt.
and exception error says:
{"Attempted to read or write protected memory. This is often an
indication that other memory is corrupt."}
Deeper investigation into the code made it clear that I get the exception in this function:
void Classifier::Preprocess(const cv::Mat& img, std::vector<cv::Mat>* input_channels)
{
    /* Convert the input image to the input image format of the network. */
    cv::Mat sample;
    if (img.channels() == 3 && num_channels_ == 1)
        cv::cvtColor(img, sample, cv::COLOR_BGR2GRAY);
    else if (img.channels() == 4 && num_channels_ == 1)
        cv::cvtColor(img, sample, cv::COLOR_BGRA2GRAY);
    else if (img.channels() == 4 && num_channels_ == 3)
        cv::cvtColor(img, sample, cv::COLOR_BGRA2BGR);
    else if (img.channels() == 1 && num_channels_ == 3)
        cv::cvtColor(img, sample, cv::COLOR_GRAY2BGR);
    else
        sample = img;

    //resize image according to the input
    cv::Mat sample_resized;
    if (sample.size() != input_geometry_)
        cv::resize(sample, sample_resized, input_geometry_);
    else
        sample_resized = sample;

    cv::Mat sample_float;
    if (num_channels_ == 3)
        sample_resized.convertTo(sample_float, CV_32FC3);
    else
        sample_resized.convertTo(sample_float, CV_32FC1);

    cv::Mat sample_normalized;
    cv::subtract(sample_float, mean_, sample_normalized);

    /* This operation will write the separate BGR planes directly to the
     * input layer of the network because it is wrapped by the cv::Mat
     * objects in input_channels. */
    cv::split(sample_normalized, *input_channels);

    CHECK(reinterpret_cast<float*>(input_channels->at(0).data)
          == net_->input_blobs()[0]->cpu_data())
        << "Input channels are not wrapping the input layer of the network.";
}
The access violation occurs when it attempts to resize the image, i.e. when running this snippet:
//resize image according to the input
cv::Mat sample_resized;
if (sample.size() != input_geometry_)
    cv::resize(sample, sample_resized, input_geometry_);
Further investigation and debugging (here) made the culprit visible!
This approach proved to be plain wrong, or at the very least buggy. Using this code, the image on the C++ side seemed to be initialized properly: the number of channels, the height, and the width all looked fine.
But the moment you attempt to use the image, whether by resizing it or even showing it with imshow(), the application crashes with an access violation exception, the very same error that happened when resizing and that is posted in the question.
Looking at this answer, I changed the C# code responsible for handing the image to the DLL. The new code is as follows:
//Dll import
[DllImport(@"CDll2.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
static extern void Classify_Image(IntPtr img, uint height, uint width, byte[] out_result, out int out_result_length, int top_n_results = 2);
//...
//main code
Bitmap img = new Bitmap(txtImagePath.Text);
BitmapData bmpData = img.LockBits(new Rectangle(0, 0, img.Width, img.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
result = Classify_UsingImage(bmpData, 1);
img.UnlockBits(bmpData); //Remember to unlock!!!
and the C++ code in the DLL:
CDLL2_API void Classify_Image(unsigned char* img_pointer, unsigned int height, unsigned int width, char* out_result, int* length_of_out_result, int top_n_results)
{
    auto classifier = reinterpret_cast<Classifier*>(GetHandle());
    cv::Mat img = cv::Mat(height, width, CV_8UC3, (void*)img_pointer, Mat::AUTO_STEP);
    std::vector<Prediction> result = classifier->Classify(img, top_n_results);
    //...
    *length_of_out_result = ss.str().length();
}
And doing so rectified all access violations I was getting previously.
Although I can now easily send images from C# to the DLL, I have an issue with my current implementation: I don't know how to send the OpenCV type from C# to the function. Currently I use a hardcoded image type, as you can see, which begs the question: what should I do when my input image is grayscale, or even a PNG with four channels?
After trying many different approaches, I think it would be beneficial to others who want to do the same thing to know this.
To cut a very long story short (see this question), the best way I could find is this (as @EdChum put it):
I'd pass the file as memory to your openCV dll, this should be able
to call imdecode which will sniff the file type, additionally you can
pass the flag
As also explained here, the idea is to send a pointer to the DLL and use imdecode there to decode the image. This solved a lot of issues that other approaches introduced, and will also save you a lot of headaches.
Here is the code of interest. This is how my functions in the DLL and C# should have looked:
#ifdef CDLL2_EXPORTS
#define CDLL2_API __declspec(dllexport)
#else
#define CDLL2_API __declspec(dllimport)
#endif

#include "classification.h"

extern "C"
{
    CDLL2_API void Classify_Image(unsigned char* img_pointer, long data_len, char* out_result, int* length_of_out_result, int top_n_results = 2);
    //...
}
The actual method:
CDLL2_API void Classify_Image(unsigned char* img_pointer, long data_len,
                              char* out_result, int* length_of_out_result, int top_n_results)
{
    auto classifier = reinterpret_cast<Classifier*>(GetHandle());
    vector<unsigned char> inputImageBytes(img_pointer, img_pointer + data_len);
    cv::Mat img = imdecode(inputImageBytes, CV_LOAD_IMAGE_COLOR);
    cv::imshow("img just received from c#", img);
    std::vector<Prediction> result = classifier->Classify(img, top_n_results);
    //...
    *length_of_out_result = ss.str().length();
}
Here is the C# DllImport:
[DllImport(@"CDll2.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
static extern void Classify_Image(byte[] img, long data_len, byte[] out_result, out int out_result_length, int top_n_results = 2);
And this is the actual method sending the image to the DLL:
private string Classify_UsingImage(Bitmap image, int top_n_results)
{
    byte[] result = new byte[200];
    int len;
    Bitmap img;
    if (chkResizeImageCShap.Checked)
        img = ResizeImage(image, int.Parse(txtWidth.Text), int.Parse(txtHeight.Text));
    else
        img = image;
    ImageFormat fmt = new ImageFormat(image.RawFormat.Guid);
    var imageCodecInfo = ImageCodecInfo.GetImageEncoders().FirstOrDefault(codec => codec.FormatID == image.RawFormat.Guid);
    //this is for situations where the image is not read from disk and is stored in memory (e.g. it comes from a camera or snapshot)
    if (imageCodecInfo == null)
    {
        fmt = ImageFormat.Jpeg;
    }
    using (MemoryStream ms = new MemoryStream())
    {
        img.Save(ms, fmt);
        byte[] image_byte_array = ms.ToArray();
        Classify_Image(image_byte_array, ms.Length, result, out len, top_n_results);
    }
    return ASCIIEncoding.ASCII.GetString(result);
}

MatchTemplate image with image converted to BYTE pointer in OpenCV

I'm loading a C++ library from my C# code dynamically. I want to find a small image inside a large one, converting the large image to a byte[] while the small image is read from a physical path. When I call imdecode, large_img always returns 0 cols and rows.
C#
[UnmanagedFunctionPointer(CallingConvention.Cdecl)]
private delegate ImageParams GetImageParams(IntPtr dataPtr, int size, string path);
// ...
byte[] largeImgByteArr = this.BitmapToByteArray(bmp);
IntPtr dataPtr = Marshal.AllocHGlobal(largeImgByteArr.Length);
Marshal.Copy(dataPtr, largeImgByteArr, 0, largeImgByteArr.Length);
C++
ImageParams GetImageParams(BYTE* largeImgBuf, int bufLength, const char* smallImgPath)
{
    Mat large_img_data(bufLength, 1, CV_32FC1, largeImgBuf);
    Mat large_img = imdecode(large_img_data, IMREAD_COLOR);
    Mat small_img = imread(smallImgPath, IMREAD_COLOR);

    int result_cols = large_img.cols - small_img.cols + 1;
    int result_rows = large_img.rows - small_img.rows + 1;

    Mat result;
    result.create(result_cols, result_rows, CV_32FC1);
    matchTemplate(large_img, small_img, result, CV_TM_SQDIFF_NORMED);
    normalize(result, result, 0, 1, NORM_MINMAX, -1, Mat());
}
What am I doing wrong here?
Note: I have checked that the image path is correct and the byte array is not empty.
Edit 1
I changed my code a bit by providing the large image's width and height, got rid of imdecode, and changed the code to something like in this post.
ImageParams GetImageParams(BYTE* largeImgBuf, int height, int width, int bufLength, const char* smallImgPath)
{
    // Mat large_img = imdecode(large_img_data, IMREAD_COLOR);
    Mat large_img = Mat(height, width, CV_8UC3, largeImgBuf);
    Mat small_img = imread(templPath, 1);
    /// ...
}
Now it returns rows and columns, but when I call the matchTemplate method it throws an exception.
Remember that the Bitmap structure in C# uses data padding (a stride that is a multiple of 4), whereas OpenCV may not. Try creating the Mat object directly from the byte array, but adjust the step (stride) value, as the constructor just assigns the data without any ownership or reallocation.
EDIT
Here's an example of how to create an OpenCV Mat object from a Bitmap. The data from the bitmap is not copied, only assigned. The PixelFormat and the OpenCV mat type must have a corresponding single-element size in bytes.
cv::Mat ImageBridge::cvimage(System::Drawing::Bitmap^ bitmap){
    if(bitmap != nullptr){
        switch(bitmap->PixelFormat){
            case System::Drawing::Imaging::PixelFormat::Format24bppRgb:
                return bmp2mat(bitmap, CV_8UC3);
            case System::Drawing::Imaging::PixelFormat::Format8bppIndexed:
                return bmp2mat(bitmap, CV_8U);
            default:
                return cv::Mat();
        }
    }
    else
        return cv::Mat();
}

cv::Mat ImageBridge::bmp2mat(System::Drawing::Bitmap^ bitmap, int image_type){
    auto bitmap_data = bitmap->LockBits(
        System::Drawing::Rectangle(0, 0, bitmap->Width, bitmap->Height),
        System::Drawing::Imaging::ImageLockMode::ReadWrite,
        bitmap->PixelFormat);
    char* bmpptr = (char*)bitmap_data->Scan0.ToPointer();
    cv::Mat image(
        cv::Size(bitmap->Width, bitmap->Height),
        image_type,
        bmpptr,
        bitmap_data->Stride);
    bitmap->UnlockBits(bitmap_data);
    return image;
}
EDIT 2
Conversion in reverse: this time the data from the Mat image is copied, as the Bitmap allocates its own memory.
System::Drawing::Bitmap^ ImageBridge::bitmap(cv::Mat& image){
    if(!image.empty() && image.type() == CV_8UC3)
        return mat2bmp(image, System::Drawing::Imaging::PixelFormat::Format24bppRgb);
    else if(!image.empty() && image.type() == CV_8U)
        return mat2bmp(image, System::Drawing::Imaging::PixelFormat::Format8bppIndexed);
    else
        return nullptr;
}

System::Drawing::Bitmap^ ImageBridge::mat2bmp(cv::Mat& image, System::Drawing::Imaging::PixelFormat pixel_format){
    if(image.empty())
        return nullptr;
    System::Drawing::Bitmap^ bitmap = gcnew System::Drawing::Bitmap(image.cols, image.rows, pixel_format);
    auto bitmap_data = bitmap->LockBits(
        System::Drawing::Rectangle(0, 0, bitmap->Width, bitmap->Height),
        System::Drawing::Imaging::ImageLockMode::ReadWrite,
        pixel_format);
    char* bmpptr = (char*)bitmap_data->Scan0.ToPointer();
    int line_length = (int)image.step;
    int bmp_stride = bitmap_data->Stride;
    assert(!image.isSubmatrix());
    assert(bmp_stride >= 0);
    for(int l = 0; l < image.rows; l++){
        char* cvptr = (char*)image.ptr(l);
        int bmp_line_index = l * bmp_stride;
        for(int i = 0; i < line_length; ++i)
            bmpptr[bmp_line_index + i] = cvptr[i];
    }
    bitmap->UnlockBits(bitmap_data);
    return bitmap;
}
Or, if you have a Mat image whose step is a multiple of 4, you can use the non-copy version.
System::Drawing::Bitmap^ ImageBridge::bitmap2(cv::Mat& image){
    System::Drawing::Bitmap^ bitmap;
    assert(!image.isSubmatrix());
    if(!image.empty() && image.type() == CV_8UC3){
        bitmap = gcnew System::Drawing::Bitmap(
            image.cols, image.rows,
            3 * image.cols,
            System::Drawing::Imaging::PixelFormat::Format24bppRgb,
            System::IntPtr(image.data));
    }
    else if(!image.empty() && image.type() == CV_8U){
        bitmap = gcnew System::Drawing::Bitmap(
            image.cols, image.rows,
            image.cols,
            System::Drawing::Imaging::PixelFormat::Format8bppIndexed,
            System::IntPtr(image.data));
    }
    return bitmap;
}
According to the documentation
The function reads an image from the specified buffer in the memory. If the buffer is too short or contains invalid data, the empty matrix/image is returned.
Try checking the error code after the imdecode call:
#include <errno.h>
cout << errno;

WinApi - Byte Array to Gray 8-bit Bitmap (+Performance)

I have a byte array that needs to be displayed on the desktop (or a Form). I'm using WinApi for this and am not sure how to set all the pixels at once. The byte array is in memory and needs to be displayed as quickly as possible (with just WinApi).
I'm using C#, but simple pseudo-code would be fine for me:
// create bitmap
byte[] bytes = ...; // contains pixel data, 1 byte per pixel
HDC desktopDC = GetWindowDC(GetDesktopWindow());
HDC bitmapDC = CreateCompatibleDC(desktopDC);
HBITMAP bitmap = CreateCompatibleBitmap(bitmapDC, 320, 240);
DeleteObject(SelectObject(bitmapDC, bitmap));

BITMAPINFO info = new BITMAPINFO();
info.bmiColors = new tagRGBQUAD[256];
for (int i = 0; i < info.bmiColors.Length; i++)
{
    info.bmiColors[i].rgbRed = (byte)i;
    info.bmiColors[i].rgbGreen = (byte)i;
    info.bmiColors[i].rgbBlue = (byte)i;
    info.bmiColors[i].rgbReserved = 0;
}
info.bmiHeader = new BITMAPINFOHEADER();
info.bmiHeader.biSize = (uint)Marshal.SizeOf(info.bmiHeader);
info.bmiHeader.biWidth = 320;
info.bmiHeader.biHeight = 240;
info.bmiHeader.biPlanes = 1;
info.bmiHeader.biBitCount = 8;
info.bmiHeader.biCompression = BI_RGB;
info.bmiHeader.biSizeImage = 0;
info.bmiHeader.biClrUsed = 256;
info.bmiHeader.biClrImportant = 0;

// next line throws wrong parameter exception all the time
// SetDIBits(bitmapDC, bh, 0, 240, Marshal.UnsafeAddrOfPinnedArrayElement(info.bmiColors, 0), ref info, DIB_PAL_COLORS);

// how do i store all pixels into the bitmap at once ?
for (int i = 0; i < bytes.Length; i++)
    SetPixel(bitmapDC, i % 320, i / 320, random(0x1000000));

// draw the bitmap
BitBlt(desktopDC, 0, 0, 320, 240, bitmapDC, 0, 0, SRCCOPY);
When I just try to set each pixel by itself with SetPixel(), I see a monochrome image without gray levels: only black and white. How can I correctly create a grayscale bitmap for display, and how do I do it quickly?
Update:
The call ends up in an error outside of my program, in WinApi, and I can't catch the exception:
public const int DIB_RGB_COLORS = 0;
public const int DIB_PAL_COLORS = 1;

[DllImport("gdi32.dll")]
public static extern int SetDIBits(IntPtr hdc, IntPtr hbmp, uint uStartScan, uint cScanLines, byte[] lpvBits, [In] ref BITMAPINFO lpbmi, uint fuColorUse);

// parameters as above
SetDIBits(bitmapDC, bitmap, 0, 240, bytes, ref info, DIB_RGB_COLORS);
Two of the SetDIBits parameters are wrong:
lpvBits - this is the image data, but you're passing the palette data. You should be passing your bytes array.
lpBmi - this one is OK: the BITMAPINFO structure contains both the BITMAPINFOHEADER and the palette, so you don't need to pass the palette separately. My answer to your other question describes how to declare the structure.
fuColorUse - this describes the format of the palette. You are using an RGB palette, so you should pass DIB_RGB_COLORS.
