Passing a pointer from C# to C++

I am trying to pass a 2D mask (all 0s, except for a region of interest filled with 1s) from C# (as short[]) to C++ (as unsigned short*), but I cannot get the right values in C++.
C#
[DllImport("StatsManager.dll", EntryPoint = "SetStatsMask")]
private static extern int SetStatsMask(IntPtr mask, int imgWidth, int imgHeight);
short[] mask = new short[8 * 8];
// some operation here making the ROI in mask all 1s, e.g. 0000111100000000 in 1D
IntPtr maskPtr = Marshal.AllocHGlobal(2 * mask.Length); // 2 bytes per short element
Marshal.Copy(mask, 0, maskPtr, mask.Length);
SetStatsMask(maskPtr, width, height);
C++
long StatsManager::SetStatsMask(unsigned short *mask, long width, long height)
{
    // create memory to store the incoming mask
    // memcpy the mask to the new buffer
    // pMask = realloc(pMask, width*height*sizeof(unsigned short));
    long ret = TRUE;
    if (NULL == _pMask)
    {
        _pMask = new unsigned short[width * height];
    }
    else
    {
        realloc(_pMask, width * height * sizeof(unsigned short));
    }
    memcpy(mask, _pMask, width * height * sizeof(unsigned short));
    SaveBuffer(_pMask, width, height);
    return ret;
}
But all I can see for mask in C++ in the watch window is 52536 instead of 0000111100000000, so I am wondering where I messed up. Can anyone help? Thanks.

I believe you swapped the parameters of memcpy:
memcpy(mask,_pMask,width*height*sizeof(unsigned short));
As I understand you want to copy from mask to _pMask, so you should write:
memcpy(_pMask, mask, width*height*sizeof(unsigned short));
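As a side note, the C# snippet above never frees the unmanaged buffer. A minimal sketch of the managed side with cleanup (same SetStatsMask import as above; width and height are 8 for the 8*8 mask):
short[] mask = new short[8 * 8];
// ... set the ROI elements to 1 ...
IntPtr maskPtr = Marshal.AllocHGlobal(sizeof(short) * mask.Length);
try
{
    Marshal.Copy(mask, 0, maskPtr, mask.Length); // managed -> unmanaged
    SetStatsMask(maskPtr, 8, 8);
}
finally
{
    Marshal.FreeHGlobal(maskPtr); // avoid leaking the unmanaged buffer
}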

Getting the screen pixels as byte array

I need to change my screen capture code to get a pixel array instead of a Bitmap.
I change the code to this:
BitBlt > Image.FromHbitmap(pointer) > LockBits > pixel array
But I'm checking whether it's possible to cut out a middleman and have something like this:
BitBlt > Marshal.Copy > pixel array
Or even:
WinApi method that gets the screen region as a pixel array
So far, I tried to use this code, without success:
public static byte[] CaptureAsArray(Size size, int positionX, int positionY)
{
    var hDesk = GetDesktopWindow();
    var hSrce = GetWindowDC(hDesk);
    var hDest = CreateCompatibleDC(hSrce);
    var hBmp = CreateCompatibleBitmap(hSrce, (int)size.Width, (int)size.Height);
    var hOldBmp = SelectObject(hDest, hBmp);
    try
    {
        new System.Security.Permissions.UIPermission(System.Security.Permissions.UIPermissionWindow.AllWindows).Demand();
        var b = BitBlt(hDest, 0, 0, (int)size.Width, (int)size.Height, hSrce, positionX, positionY, CopyPixelOperation.SourceCopy | CopyPixelOperation.CaptureBlt);
        var length = 4 * (int)size.Width * (int)size.Height;
        var bytes = new byte[length];
        Marshal.Copy(hBmp, bytes, 0, length); // this is the line that fails: hBmp is a handle, not a pointer
        //return b ? Image.FromHbitmap(hBmp) : null;
        return bytes;
    }
    finally
    {
        SelectObject(hDest, hOldBmp);
        DeleteObject(hBmp);
        DeleteDC(hDest);
        ReleaseDC(hDesk, hSrce);
    }
}
This code gives me a System.AccessViolationException while stepping on Marshal.Copy.
Is there any more efficient way of getting screen pixels as a byte array while using BitBlt or similar screen capture methods?
EDIT:
As found here and as suggested by CodyGray, I should use:
var b = Native.BitBlt(_compatibleDeviceContext, 0, 0, Width, Height, _windowDeviceContext, Left, Top, Native.CopyPixelOperation.SourceCopy | Native.CopyPixelOperation.CaptureBlt);
var bi = new Native.BITMAPINFOHEADER();
bi.biSize = (uint)Marshal.SizeOf(bi);
bi.biBitCount = 32;
bi.biClrUsed = 0;
bi.biClrImportant = 0;
bi.biCompression = 0;
bi.biHeight = Height;
bi.biWidth = Width;
bi.biPlanes = 1;
var data = new byte[4 * Width * Height];
Native.GetDIBits(_windowDeviceContext, _compatibleBitmap, 0, (uint)Height, data, ref bi, Native.DIB_Color_Mode.DIB_RGB_COLORS);
My data array has all the pixels of the screenshot.
Now, I'm going to test whether there are any performance improvements.
Yeah, you can't just start accessing the raw bits of a BITMAP object through an HBITMAP (as returned by CreateCompatibleBitmap). An HBITMAP is just a handle, as the name suggests. It's not a pointer in the classic C sense that points to the beginning of the array; handles are like indirect pointers.
GetDIBits is the appropriate solution to get the raw, device-independent pixel array from a bitmap that you can iterate through. But you'll still need to use the code you have to get the screen bitmap in the first place. Essentially, you want something like this. Of course, you'll need to translate it into C#, but that shouldn't be difficult, since you already know how to call WinAPI functions.
Note that you do not need to call GetDesktopWindow or GetWindowDC. Just pass NULL as the handle to GetDC; it has the same effect of returning a screen DC, which you can then use to create a compatible bitmap. In general, you should almost never call GetDesktopWindow.
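As a minimal C# sketch of that simplification (the GetDC/ReleaseDC imports shown are the standard user32 signatures; error handling omitted):
[DllImport("user32.dll")]
static extern IntPtr GetDC(IntPtr hWnd);
[DllImport("user32.dll")]
static extern int ReleaseDC(IntPtr hWnd, IntPtr hDC);

// A NULL window handle yields a DC for the entire screen,
// so GetDesktopWindow/GetWindowDC are unnecessary.
IntPtr screenDC = GetDC(IntPtr.Zero);
try
{
    // ... CreateCompatibleDC / CreateCompatibleBitmap / BitBlt / GetDIBits as above ...
}
finally
{
    ReleaseDC(IntPtr.Zero, screenDC);
}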

Pass char* argument containing image data to C/C++ DLL (Glfw)

I want to pass image data to a C++ DLL function that looks like this:
GLFWcursor* glfwCreateCursor(
    const GLFWimage* image,
    int xhot,
    int yhot
)
It supposedly crashes because of this struct (C#):
[StructLayout(LayoutKind.Sequential)]
public struct GlfwImage
{
    public int width;
    public int height;
    public byte[] pixels;
}
"pixels" is of type "unsigned char *", width and height are simple int-values.
While the method can be load with that signature, I always
get a System.AccessViolationException when actually calling it.
I tried several datatypes for "pixels", including IntPtr and
actual pointers but to no effect.
Here is how I get the data and call it:
var bufferSize = texture.Size.Width * texture.Size.Height * 4;
IntPtr imageData = Marshal.AllocHGlobal(bufferSize);
GL.GetTexImage(TextureTarget.Texture2D, 0, PixelFormat.Rgba, PixelType.Bitmap, imageData);
byte[] imageChars = new byte[bufferSize];
Marshal.Copy(imageData, imageChars, 0, bufferSize);
GlfwImage cursorImage = new GlfwImage
{
    pixels = imageChars,
    width = texture.Size.Width,
    height = texture.Size.Height
};
GlfwCursor cursor = Glfw.CreateCursor(cursorImage, texture.Size.Width / 2, texture.Size.Height / 2);
Is there something I'm overlooking?
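One pattern worth trying, sketched here under the assumption that the binding accepts the struct as declared: declare pixels as an IntPtr and pin the managed byte array, since the native side expects a raw unsigned char* rather than a marshaled array field. Whether Glfw.CreateCursor takes the struct by value or by ref depends on the wrapper, so treat this as a sketch, not a verified fix.
[StructLayout(LayoutKind.Sequential)]
public struct GlfwImage
{
    public int width;
    public int height;
    public IntPtr pixels; // unsigned char* on the native side
}

// Pin the managed buffer so the pointer stays valid during the call.
GCHandle pin = GCHandle.Alloc(imageChars, GCHandleType.Pinned);
try
{
    GlfwImage cursorImage = new GlfwImage
    {
        width = texture.Size.Width,
        height = texture.Size.Height,
        pixels = pin.AddrOfPinnedObject()
    };
    GlfwCursor cursor = Glfw.CreateCursor(cursorImage, texture.Size.Width / 2, texture.Size.Height / 2);
}
finally
{
    pin.Free(); // GLFW documents that the image data is copied before glfwCreateCursor returns
}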

Getting distorted images after sending them from C# to OpenCV in C++?

I created a C DLL out of my C++ class which uses OpenCV for image manipulations and want to use this DLL in my C# application. Currently, this is how I have implemented it:
#ifdef CDLL2_EXPORTS
#define CDLL2_API __declspec(dllexport)
#else
#define CDLL2_API __declspec(dllimport)
#endif

#include "../classification.h"

extern "C"
{
    CDLL2_API void Classify_Image(unsigned char* img_pointer, unsigned int height, unsigned int width, char* out_result, int* length_of_out_result, int top_n_results = 2);
    //...
}
C# related code:
DLL Import section:
//Dll import
[DllImport(#"CDll2.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
static extern void Classify_Image(IntPtr img, uint height, uint width, byte[] out_result, out int out_result_length, int top_n_results = 2);
The actual function sending the image to the DLL:
//...
//main code
private string Classify(int top_n)
{
    byte[] res = new byte[200];
    int len;
    Bitmap img = new Bitmap(txtImagePath.Text);
    BitmapData bmpData = img.LockBits(new Rectangle(0, 0, img.Width, img.Height),
                                      ImageLockMode.ReadWrite,
                                      PixelFormat.Format24bppRgb);
    Classify_Image(bmpData.Scan0, (uint)bmpData.Height, (uint)bmpData.Width, res, out len, top_n);
    img.UnlockBits(bmpData); // Remember to unlock!!!
    //...
}
and the C++ code in the DLL :
CDLL2_API void Classify_Image(unsigned char* img_pointer, unsigned int height, unsigned int width,
                              char* out_result, int* length_of_out_result, int top_n_results)
{
    auto classifier = reinterpret_cast<Classifier*>(GetHandle());
    cv::Mat img = cv::Mat(height, width, CV_8UC3, (void*)img_pointer, Mat::AUTO_STEP);
    std::vector<Prediction> result = classifier->Classify(img, top_n_results);
    //...
    *length_of_out_result = ss.str().length();
}
This works perfectly with some images, but not with others. For example, when I imshow the image inside Classify_Image, right after it is created from the data sent by the C# application, I get images like this:
Problematic example:
Fine example:
Your initial issue has to do with what is called the stride or pitch of image buffers.
For performance reasons, pixel rows can be memory-aligned. In your case, the rows no longer line up because the row size in memory is not equal to the pixel row width.
The general case is:
stride = width * byte depth (bytes per channel) * number of channels + padding
In your case the Bitmap class documentation states:
The stride is the width of a single row of pixels (a scan line),
rounded up to a four-byte boundary
So if we look at the problematic image, it has a width of 1414 pixels. This is an 8-bit-per-channel RGB bitmap, so if we do the maths:
1414 * 1 * 3 (RGB, so 3 channels) = 4242 bytes
4242 is not a multiple of four (4242 / 4 = 1060.5), so 2 bytes of padding are appended to each row to reach the next four-byte boundary.
So the stride is in fact 4244 bytes, and this value needs to be passed to the C++ side so that the cv::Mat is constructed with the correct step.
Alternatively, looking at what you're doing, I'd pass the file itself as memory to your OpenCV DLL; it can then call imdecode, which will sniff the file type. Additionally, you can pass the flag cv::IMREAD_GRAYSCALE, which will load the image and convert it to grayscale on the fly.
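For illustration, here is how the stride could be computed and forwarded from the C# side shown earlier. The extra stride parameter on Classify_Image is hypothetical; on the C++ side it would replace Mat::AUTO_STEP as the step argument of the cv::Mat constructor.
// Stride for 24bpp RGB, rounded up to a four-byte boundary:
int stride = (img.Width * 3 + 3) / 4 * 4; // e.g. 1414 * 3 = 4242 -> 4244
// LockBits already reports the same value, so it is simplest to pass it along:
Classify_Image(bmpData.Scan0, (uint)bmpData.Height, (uint)bmpData.Width,
               (uint)bmpData.Stride, res, out len, top_n);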

MatchTemplate image with image converted to BYTE pointer in OpenCV

I'm loading a C++ library from my C# code dynamically. I want to find a small image inside a large one, converting the large image to a byte[] and reading the small image from a physical path. When I call imdecode, large_img always returns 0 cols and rows.
C#
[UnmanagedFunctionPointer(CallingConvention.Cdecl)]
private delegate ImageParams GetImageParams(IntPtr dataPtr, int size, string path);
// ...
byte[] largeImgByteArr = this.BitmapToByteArray(bmp);
IntPtr dataPtr = Marshal.AllocHGlobal(largeImgByteArr.Length);
Marshal.Copy(largeImgByteArr, 0, dataPtr, largeImgByteArr.Length); // managed -> unmanaged
C++
ImageParams GetImageParams(BYTE* largeImgBuf, int bufLength, const char* smallImgPath)
{
    Mat large_img_data(bufLength, 1, CV_32FC1, largeImgBuf);
    Mat large_img = imdecode(large_img_data, IMREAD_COLOR);
    Mat small_img = imread(smallImgPath, IMREAD_COLOR);

    int result_cols = large_img.cols - small_img.cols + 1;
    int result_rows = large_img.rows - small_img.rows + 1;
    Mat result;
    result.create(result_cols, result_rows, CV_32FC1);

    matchTemplate(large_img, small_img, result, CV_TM_SQDIFF_NORMED);
    normalize(result, result, 0, 1, NORM_MINMAX, -1, Mat());
}
What am I doing wrong here?
Note: I have checked that image path is correct and byte array not empty.
Edit 1
I changed my code a bit: I now provide the large image's width and height, got rid of imdecode, and create the Mat as in this post.
ImageParams GetImageParams(BYTE* largeImgBuf, int height, int width, int bufLength, const char* smallImgPath)
{
    // Mat large_img = imdecode(large_img_data, IMREAD_COLOR);
    Mat large_img = Mat(height, width, CV_8UC3, largeImgBuf);
    Mat small_img = imread(smallImgPath, 1);
    /// ...
}
Now it returns rows and columns, but when I call the matchTemplate method it throws an exception:
Remember that the Bitmap structure in C# uses data padding (a stride that is a multiple of 4), whereas OpenCV may not. Try creating the Mat object directly from the byte array, but adjust its step (stride) value, as this constructor just wraps the data without any ownership or reallocation.
EDIT
Here's an example of how to create an OpenCV Mat object from a Bitmap. The data from the bitmap is not copied, only assigned. The PixelFormat and the OpenCV Mat type must have a corresponding element size in bytes.
cv::Mat ImageBridge::cvimage(System::Drawing::Bitmap^ bitmap){
    if(bitmap != nullptr){
        switch(bitmap->PixelFormat){
        case System::Drawing::Imaging::PixelFormat::Format24bppRgb:
            return bmp2mat(bitmap, CV_8UC3);
        case System::Drawing::Imaging::PixelFormat::Format8bppIndexed:
            return bmp2mat(bitmap, CV_8U);
        default:
            return cv::Mat();
        }
    }
    else
        return cv::Mat();
}
cv::Mat ImageBridge::bmp2mat(System::Drawing::Bitmap^ bitmap, int image_type){
    auto bitmap_data = bitmap->LockBits(
        System::Drawing::Rectangle(0, 0, bitmap->Width, bitmap->Height),
        System::Drawing::Imaging::ImageLockMode::ReadWrite,
        bitmap->PixelFormat);
    char* bmpptr = (char*)bitmap_data->Scan0.ToPointer();

    cv::Mat image(
        cv::Size(bitmap->Width, bitmap->Height),
        image_type,
        bmpptr,
        bitmap_data->Stride);

    bitmap->UnlockBits(bitmap_data);
    return image;
}
EDIT 2
Conversion in reverse: this time the data from the Mat image is copied, as Bitmap allocates its own memory.
System::Drawing::Bitmap^ ImageBridge::bitmap(cv::Mat& image){
    if(!image.empty() && image.type() == CV_8UC3)
        return mat2bmp(image, System::Drawing::Imaging::PixelFormat::Format24bppRgb);
    else if(!image.empty() && image.type() == CV_8U)
        return mat2bmp(image, System::Drawing::Imaging::PixelFormat::Format8bppIndexed);
    else
        return nullptr;
}

System::Drawing::Bitmap^ ImageBridge::mat2bmp(cv::Mat& image, System::Drawing::Imaging::PixelFormat pixel_format){
    if(image.empty())
        return nullptr;

    System::Drawing::Bitmap^ bitmap = gcnew System::Drawing::Bitmap(image.cols, image.rows, pixel_format);
    auto bitmap_data = bitmap->LockBits(
        System::Drawing::Rectangle(0, 0, bitmap->Width, bitmap->Height),
        System::Drawing::Imaging::ImageLockMode::ReadWrite,
        pixel_format);
    char* bmpptr = (char*)bitmap_data->Scan0.ToPointer();

    int line_length = (int)image.step;
    int bmp_stride = bitmap_data->Stride;
    assert(!image.isSubmatrix());
    assert(bmp_stride >= 0);
    for(int l = 0; l < image.rows; l++){
        char* cvptr = (char*)image.ptr(l);
        int bmp_line_index = l * bmp_stride;
        for(int i = 0; i < line_length; ++i)
            bmpptr[bmp_line_index + i] = cvptr[i];
    }

    bitmap->UnlockBits(bitmap_data);
    return bitmap;
}
Or, if you have a Mat image whose step is a multiple of 4, you can use the non-copying version:
System::Drawing::Bitmap^ ImageBridge::bitmap2(cv::Mat& image){
    System::Drawing::Bitmap^ bitmap;
    assert(!image.isSubmatrix());
    if(!image.empty() && image.type() == CV_8UC3){
        bitmap = gcnew System::Drawing::Bitmap(
            image.cols, image.rows,
            3 * image.cols,
            System::Drawing::Imaging::PixelFormat::Format24bppRgb,
            System::IntPtr(image.data));
    }
    else if(!image.empty() && image.type() == CV_8U){
        bitmap = gcnew System::Drawing::Bitmap(
            image.cols, image.rows,
            image.cols,
            System::Drawing::Imaging::PixelFormat::Format8bppIndexed,
            System::IntPtr(image.data));
    }
    return bitmap;
}
According to the documentation:
"The function reads an image from the specified buffer in memory. If the buffer is too short or contains invalid data, the empty matrix/image is returned."
Try checking the error code after the imdecode call:
#include <errno.h>
cout << errno;

WinApi - Byte Array to Gray 8-bit Bitmap (+Performance)

I have a byte array that needs to be displayed on the desktop (or a Form). I'm using WinApi for that and am not sure how to set all the pixels at once. The byte array is in my program's memory and needs to be displayed as quickly as possible (with just WinApi).
I'm using C#, but simple pseudo-code would be OK for me:
// create bitmap
byte[] bytes = ...; // contains pixel data, 1 byte per pixel
HDC desktopDC = GetWindowDC(GetDesktopWindow());
HDC bitmapDC = CreateCompatibleDC(desktopDC);
HBITMAP bitmap = CreateCompatibleBitmap(bitmapDC, 320, 240);
DeleteObject(SelectObject(bitmapDC, bitmap));

BITMAPINFO info = new BITMAPINFO();
info.bmiColors = new tagRGBQUAD[256];
for (int i = 0; i < info.bmiColors.Length; i++)
{
    info.bmiColors[i].rgbRed = (byte)i;
    info.bmiColors[i].rgbGreen = (byte)i;
    info.bmiColors[i].rgbBlue = (byte)i;
    info.bmiColors[i].rgbReserved = 0;
}

info.bmiHeader = new BITMAPINFOHEADER();
info.bmiHeader.biSize = (uint)Marshal.SizeOf(info.bmiHeader);
info.bmiHeader.biWidth = 320;
info.bmiHeader.biHeight = 240;
info.bmiHeader.biPlanes = 1;
info.bmiHeader.biBitCount = 8;
info.bmiHeader.biCompression = BI_RGB;
info.bmiHeader.biSizeImage = 0;
info.bmiHeader.biClrUsed = 256;
info.bmiHeader.biClrImportant = 0;

// next line throws a wrong-parameter exception all the time
// SetDIBits(bitmapDC, bh, 0, 240, Marshal.UnsafeAddrOfPinnedArrayElement(info.bmiColors, 0), ref info, DIB_PAL_COLORS);

// how do I store all pixels into the bitmap at once?
for (int i = 0; i < bytes.Length; i++)
    SetPixel(bitmapDC, i % 320, i / 320, random(0x1000000));

// draw the bitmap
BitBlt(desktopDC, 0, 0, 320, 240, bitmapDC, 0, 0, SRCCOPY);
When I just try to set each pixel by itself with SetPixel(), I see a monochrome image with no grays, only black and white. How can I correctly create a grayscale bitmap for display? And how do I do it quickly?
Update:
The call ends up in an error outside of my program, in WinApi, and I can't catch the exception:
public const int DIB_RGB_COLORS = 0;
public const int DIB_PAL_COLORS = 1;
[DllImport("gdi32.dll")]
public static extern int SetDIBits(IntPtr hdc, IntPtr hbmp, uint uStartScan, uint cScanLines, byte[] lpvBits, [In] ref BITMAPINFO lpbmi, uint fuColorUse);
// parameters as above
SetDIBits(bitmapDC, bitmap, 0, 240, bytes, ref info, DIB_RGB_COLORS);
Two of the SetDIBits parameters are wrong:
lpvBits - this is the image data, but you're passing the palette data. You should be passing your bytes array.
lpBmi - this is OK - the BITMAPINFO structure contains both the BITMAPINFOHEADER and the palette so you don't need to pass the palette separately. My answer to your other question describes how to declare the structure.
fuColorUse - this describes the format of the palette. You are using an RGB palette so you should pass DIB_RGB_COLORS.
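For reference, one way to declare BITMAPINFO in C# with the 256-entry palette inlined, sketched along the lines of the structure described above (the referenced answer itself is not reproduced here):
[StructLayout(LayoutKind.Sequential)]
public struct RGBQUAD
{
    public byte rgbBlue;
    public byte rgbGreen;
    public byte rgbRed;
    public byte rgbReserved;
}

[StructLayout(LayoutKind.Sequential)]
public struct BITMAPINFO
{
    public BITMAPINFOHEADER bmiHeader;
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 256)]
    public RGBQUAD[] bmiColors; // full 8-bit grayscale palette
}

// With such a declaration, the corrected call from the update takes the form:
// SetDIBits(bitmapDC, bitmap, 0, 240, bytes, ref info, DIB_RGB_COLORS);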
