There is a post about converting OpenCV cv::Mat to Texture2D in Unity, and I provided an answer which works well. Now I am trying to do the opposite, but I have been stuck on this for a few hours now.
I want to convert Unity's Texture2D to OpenCV cv::Mat so that I can process the Texture on the C++ side.
Here is the original Texture2D in my Unity project that I want to convert to cv::Mat:
Here is what it looks like after converting it into cv::Mat:
It looks washed out. I am not worried about the rotation of the image; I can fix that. I just want to know why it looks so washed out. I also used cv::imwrite to save the image for testing purposes, and the issue is also present in the saved image.
C# code:
[DllImport("TextureConverter")]
private static extern void TextureToCVMat(IntPtr texData, int width, int height);
unsafe void TextureToCVMat(Texture2D texData)
{
Color32[] texDataColor = texData.GetPixels32();
//Pin Memory
fixed (Color32* p = texDataColor)
{
TextureToCVMat((IntPtr)p, texData.width, texData.height);
}
}
public Texture2D tex;
void Start()
{
TextureToCVMat(tex);
}
C++ code:
DLLExport void TextureToCVMat(unsigned char* texData, int width, int height)
{
Mat texture(height, width, CV_8UC4, texData);
cvNamedWindow("Unity Texture", CV_WINDOW_NORMAL);
//cvResizeWindow("Unity Texture", 200, 200);
cv::imshow("Unity Texture", texture);
cv::imwrite("Inno Image.jpg", texture);
}
I also tried creating a struct on the C++ side to hold the pixel information instead of using unsigned char*, but the result is still the same:
struct Color32
{
uchar r;
uchar g;
uchar b;
uchar a;
};
DLLExport void TextureToCVMat(Color32* texData, int width, int height)
{
Mat texture(height, width, CV_8UC4, texData);
cvNamedWindow("Unity Texture", CV_WINDOW_NORMAL);
cvResizeWindow("Unity Texture", 200, 200);
cv::imshow("Unity Texture", texture);
}
Why does the image look so washed out, and how do you fix this?
OpenCV's imshow and imwrite interpret image data as BGR by default, whereas Color32 stores pixels as RGBA. However, since the OP mentioned in the comments that Texture2D.format reports the texture format as RGB24, we can drop the alpha channel altogether and just swap the red and blue channels:
DLLExport void TextureToCVMat(unsigned char* texData, int width, int height)
{
Mat texture(height, width, CV_8UC4, texData);
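//The incoming buffer is RGBA, so all that is needed is to swap the first and
//third channels and drop alpha; COLOR_BGRA2RGB does exactly that swap, leaving
//the data in the BGR order that imshow and imwrite expect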
cv::cvtColor(texture,texture,cv::COLOR_BGRA2RGB);
cv::imshow("Unity Texture", texture);
cv::waitKey(0);
cv::destroyAllWindows();
}
Meta Context:
I'm currently working on a game that uses OpenCV as a substitute for ordinary inputs (keyboard, mouse, etc.). I'm using Unity3D's C# scripts and OpenCV in C++ via DllImport. My goal is to create an image inside my game coming from OpenCV.
Code Context:
As is usual in OpenCV, I'm using Mat to represent my image. This is how I'm exporting the image bytes:
cv::Mat _currentFrame;
...
extern "C" byte * EXPORT GetRawImage()
{
return _currentFrame.data;
}
And this is how I'm importing it from C#:
[DllImport ("ImageInputInterface")]
private static extern IntPtr GetRawImage ();
...
public static void GetRawImageBytes (ref byte[] result, int arrayLength) {
IntPtr a = GetRawImage ();
Marshal.Copy(a, result, 0, arrayLength);
FreeBuffer(a);
}
Judging by my understanding of OpenCV, I expect the byte array to be structured this way when serialized through a uchar pointer:
b1, g1, r1, b2, g2, r2, ...
I'm converting this BGR array to a RGB array using:
public static void BGR2RGB(ref byte[] buffer) {
byte swap;
for (int i = 0; i < buffer.Length; i = i + 3) {
swap = buffer[i];
buffer[i] = buffer[i + 2];
buffer[i + 2] = swap;
}
}
Finally, I'm using Unity's LoadRawTextureData to load the bytes to a texture:
this.tex = new Texture2D(
ImageInputInterface.GetImageWidth(),
ImageInputInterface.GetImageHeight(),
TextureFormat.RGB24,
false
);
...
ImageInputInterface.GetRawImageBytes(ref ret, ret.Length);
ImageInputInterface.BGR2RGB(ref ret);
tex.LoadRawTextureData(ret);
tex.Apply();
Results:
The final image seems to be scattered in some way; it resembles some shapes, but it also seems to triple them. This is me holding my hand in front of the camera:
[Me, my hand and the camera]
Doing some tests, I concluded that I decoded the channels correctly, since, using my phone to emit RGB light, I can reproduce the colors from the real world:
[Red Test]
[Blue Test]
[Green Test]
There are also some strange lines in the image:
[Spooky Lines]
There is also my face to compare these images to:
[My face in front of the camera]
Questions:
Since I'm able to correctly decode the color channels, what have I assumed wrong in decoding the OpenCV array? Is it that I don't know how Unity's LoadRawTextureData works, or have I decoded something the wrong way?
How is the OpenCV Mat.data array structured?
UPDATE
Thanks to @Programmer, his solution worked like magic.
[Me Happy]
I changed his script a little since some of it wasn't needed, and in my case I needed to use BGR2RGBA, not RGB2RGBA:
extern "C" void EXPORT GetRawImage( byte *data, int width, int height )
{
cv::Mat resizedMat( height, width, _currentFrame.type() );
cv::resize( _currentFrame, resizedMat, resizedMat.size(), cv::INTER_CUBIC );
cv::Mat argbImg;
cv::cvtColor( resizedMat, argbImg, CV_BGR2RGBA );
std::memcpy( data, argbImg.data, argbImg.total() * argbImg.elemSize() );
}
Use SetPixels32 instead of LoadRawTextureData. Instead of returning the array data from C++, allocate and manage it on the C# side: create a Color32 array and pin it in C# with GCHandle.Alloc, send the address of the pinned Color32 array to C++, then use cv::resize to resize the cv::Mat to match the size of the pixel array sent from C#. You must do this step or expect errors or other issues.
Finally, convert the cv::Mat from RGB to ARGB, then use std::memcpy to update the array from C++. The SetPixels32 function can then be used to load that updated Color32 array into the Texture2D. This is how I do it and it has been working for me without any issues. There might be other, better ways to do it, but I have never found one.
C++:
cv::Mat _currentFrame;
void GetRawImageBytes(unsigned char* data, int width, int height)
{
//Resize Mat to match the array passed to it from C#
cv::Mat resizedMat(height, width, _currentFrame.type());
cv::resize(_currentFrame, resizedMat, resizedMat.size(), cv::INTER_CUBIC);
//You may not need this line. Depends on what you are doing
cv::imshow("Nicolas", resizedMat);
//Convert from RGB to ARGB
cv::Mat argb_img;
cv::cvtColor(resizedMat, argb_img, CV_RGB2BGRA);
std::vector<cv::Mat> bgra;
cv::split(argb_img, bgra);
std::swap(bgra[0], bgra[3]);
std::swap(bgra[1], bgra[2]);
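//Note: without a following cv::merge(bgra, argb_img), these two swaps only
//affect the split planes and do not change the data copied below; the asker's
//update above drops them entirely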
std::memcpy(data, argb_img.data, argb_img.total() * argb_img.elemSize());
}
C#:
Attach this to any GameObject with a Renderer and you should see the cv::Mat displayed and updated on that object every frame. The code is commented in case anything is confusing:
using System;
using System.Runtime.InteropServices;
using UnityEngine;
public class Test : MonoBehaviour
{
[DllImport("ImageInputInterface")]
private static extern void GetRawImageBytes(IntPtr data, int width, int height);
private Texture2D tex;
private Color32[] pixel32;
private GCHandle pixelHandle;
private IntPtr pixelPtr;
void Start()
{
InitTexture();
gameObject.GetComponent<Renderer>().material.mainTexture = tex;
}
void Update()
{
MatToTexture2D();
}
void InitTexture()
{
tex = new Texture2D(512, 512, TextureFormat.ARGB32, false);
pixel32 = tex.GetPixels32();
//Pin pixel32 array
pixelHandle = GCHandle.Alloc(pixel32, GCHandleType.Pinned);
//Get the pinned address
pixelPtr = pixelHandle.AddrOfPinnedObject();
}
void MatToTexture2D()
{
//Convert Mat to Texture2D
GetRawImageBytes(pixelPtr, tex.width, tex.height);
//Update the Texture2D with array updated in C++
tex.SetPixels32(pixel32);
tex.Apply();
}
void OnApplicationQuit()
{
//Free handle
pixelHandle.Free();
}
}
So I have some C++ opencv code that I'm calling from C# inside unity to process the input from the webcam.
I compile my C++ code to a DLL that I then import into C# with a DLLImport. I pass a pinned GCHandle reference to the C++ code so that I can manipulate the image array from C++. This all works. I can pass each frame to my C++ dll and have it grayscale it. It works wonderfully.
The problem arises when I try to do things other than just making the frame grayscale. I tried to do a simple blur() and the output comes out with a weird ghosted image to the left and right. I'm not sure what could be going wrong. It also happens when I do GaussianBlur() or Canny().
On the left is when I cover the camera; you can see the weird artifact more clearly. In the middle is the artifact itself after passing through GaussianBlur(). It seems like it creates copies of the image and overlays them with itself. And on the right is when it's just grayscaled, to show that THAT works properly. So I figure it's not something that's happening between C# and C++; it's something that happens only when I pass the frame through OpenCV's blur, GaussianBlur, or Canny.
Here is the C# code in Unity:
using UnityEngine;
using System.Collections;
using System;
using System.Runtime.InteropServices;
public class camera : MonoBehaviour {
[DllImport("tee")]
public static extern void bw([MarshalAs(UnmanagedType.LPStruct)]
IntPtr data,
int width,
int height);
WebCamTexture back;
Color32[] data;
Byte[] byteData;
Renderer rend;
String test;
Texture2D tex;
GCHandle dataHandle;
// Use this for initialization
void Start () {
back = new WebCamTexture();
back.Play();
rend = GetComponent<Renderer>();
tex = new Texture2D(back.width, back.height, TextureFormat.ARGB32, false);
data = back.GetPixels32();
dataHandle = GCHandle.Alloc(data, GCHandleType.Pinned);
}
void OnDisable()
{
dataHandle.Free();
}
// Update is called once per frame
void Update () {
back.GetPixels32(data);
bw(dataHandle.AddrOfPinnedObject(), back.width, back.height);
tex.SetPixels32(data);
tex.Apply();
rend.material.mainTexture = tex;
}
}
And here is the C++ code that gets compiled into a DLL:
#include <opencv2\core\core.hpp>
#include <opencv2\imgproc\imgproc.hpp>
using namespace std;
using namespace cv;
extern "C"
{
__declspec(dllexport) void bw(int data, int width, int height) {
unsigned char * buffer = reinterpret_cast<unsigned char *>(data);
Mat mat = Mat(width, height, CV_8UC4, buffer).clone();
Mat gray;
cvtColor(mat, gray, CV_RGBA2GRAY);
Mat blurred;
GaussianBlur(gray, blurred, Size(3, 3), 2, 2);
if (blurred.isContinuous()) {
for (int i = 0; i < (width * height); i++) {
unsigned char * pxl = buffer + 4 * i;
pxl[0] = blurred.data[i]; //red channel
pxl[1] = blurred.data[i]; //green channel
pxl[2] = blurred.data[i]; //blue channel
pxl[3] = (unsigned char)255; // alpha channel
}
}
}
}
According to OpenCV's documentation, the Mat constructor takes rows and cols as parameters, so you should switch the width and height parameters. See http://docs.opencv.org/2.4/modules/core/doc/basic_structures.html#mat-mat
Another thing: do you know how the images are stored in C#? Do they have any kind of data alignment (i.e., rows that aren't contiguous)? That could also be an issue when you create the containing Mat.
I'm on my phone at the moment; I'll try to reformat my answer ASAP.
EDIT: Thinking about it, switching width and height in the constructor only makes sense if both OpenCV and Texture2D store images in row-major order. I've checked here (http://rbwhitaker.wikidot.com/extracting-texture-data) and it seems that's the case.
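For what it's worth, on the Unity side the buffer handed to the plugin is a plain Color32[] from GetPixels32, which is tightly packed, row-major data (bottom row first) with no per-row padding, so the native side only needs to treat height as the row count. A minimal sketch of that layout assumption (the helper below is mine, not from the question):
using UnityEngine;

public static class PixelLayout
{
    // GetPixels32 returns width * height Color32 entries with no stride padding,
    // laid out row by row starting from the bottom row of the texture.
    public static Color32 PixelAt(Color32[] data, int width, int x, int yFromBottom)
    {
        return data[yFromBottom * width + x];
    }
}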
I think your problem is the way you are accessing the blurred pixel values. You should access the channel values like this instead:
for (int i = 0; i < (width * height); i++) {
unsigned char * pxl = buffer + 4 * i;
uchar v = blurred.ptr<uchar>(i / width)[i % width]; //gray value for this pixel
pxl[0] = v; //red channel
pxl[1] = v; //green channel
pxl[2] = v; //blue channel
pxl[3] = (unsigned char)255; // alpha channel
}
One other thing you can look into is how OpenCV stores the pixel values versus how you access the data buffer through the pointer. You can test this easily by rotating the blurred image before accessing it and seeing whether that gives you the correct output, or by creating the Mat as Mat mat = Mat(height, width, CV_8UC4, buffer).clone(); instead.
And you are right about the blurred type; it should be single-channel, like the gray image.
Try the code above for another way of accessing the values in the blurred image.
I've recently started looking into the topic of image processing. I figured one of the first things I should do is learn how images work. My latest project involves making a new copy of an image. I wanted to do it as fast as possible, so I tried to come up with as many approaches as I could. I wrote a method for each approach, then timed how long it took to call the method 100 times. These are my results:
Marshal: 0.45584
Instance: 1.69299
Clone: 0.30687
GetSet: 341.74056
Pointer: 2.54130
Graphics: 1.07960
Each method is passed a source image and destination image. The end goal is to copy all the pixels from the first image into the second image.
private void MarshalCopyMethod(Bitmap sourceImage, Bitmap destinationImage)
{
// Lock the bitmap's bits.
Rectangle rect = new Rectangle(0, 0, sourceImage.Width, sourceImage.Height);
BitmapData readData = sourceImage.LockBits(rect, ImageLockMode.ReadOnly, sourceImage.PixelFormat);
BitmapData writeData = destinationImage.LockBits(rect, ImageLockMode.WriteOnly, sourceImage.PixelFormat);
// Get the address of the first line.
IntPtr sourcePtr = readData.Scan0;
IntPtr destinationPtr = writeData.Scan0;
byte[] rgbValues = new byte[readData.Stride * readData.Height];
Marshal.Copy(sourcePtr, rgbValues, 0, rgbValues.Length);
Marshal.Copy(rgbValues, 0, destinationPtr, rgbValues.Length);
sourceImage.UnlockBits(readData);
destinationImage.UnlockBits(writeData);
}
private void PointerCopyMethod(Bitmap sourceImage, Bitmap destinationImage)
{
// Lock the bitmap's bits.
Rectangle rect = new Rectangle(0, 0, sourceImage.Width, sourceImage.Height);
BitmapData readData = sourceImage.LockBits(rect, ImageLockMode.ReadOnly, sourceImage.PixelFormat);
BitmapData writeData = destinationImage.LockBits(rect, ImageLockMode.WriteOnly, sourceImage.PixelFormat);
unsafe
{
// Get the address of the first line.
byte* readPointer = (byte*)readData.Scan0.ToPointer();
byte* writePointer = (byte*)writeData.Scan0.ToPointer();
int lengthOfData = readData.Stride * readData.Height;
for (int i = 0; i < lengthOfData; i++)
{
*writePointer++ = *readPointer++;
}
}
sourceImage.UnlockBits(readData);
destinationImage.UnlockBits(writeData);
}
private void InstanceCopyMethod(Bitmap sourceImage, Bitmap destinationImage)
{
destinationImage = new Bitmap(sourceImage);
}
private void CloneRegionMethod(Bitmap sourceImage, Bitmap destinationImage)
{
destinationImage = sourceImage.Clone(new Rectangle(860, 440, 200, 200), sourceImage.PixelFormat);
}
private void CloneCopyMethod(Bitmap sourceImage, Bitmap destinationImage)
{
destinationImage = (Bitmap)sourceImage.Clone();
}
private void GetSetPixelCopyMethod(Bitmap sourceImage, Bitmap destinationImage)
{
for (int y = 0; y < sourceImage.Height; y++)
{
for (int x = 0; x < sourceImage.Width; x++)
{
destinationImage.SetPixel(x, y, sourceImage.GetPixel(x, y));
}
}
}
private void GraphicsCopyMethod(Bitmap sourceImage, Bitmap destinationImage)
{
using(Graphics g = Graphics.FromImage(destinationImage))
{
g.DrawImage(sourceImage, new Point(0, 0));
}
}
The following two lines are also added to the end of every method:
destinationImage.SetPixel(955, 535, Color.Red);
destinationImage.SetPixel(965, 545, Color.Green);
I did this because of something I read about Image.Clone(). It was something to the effect that a copy was not actually created until you modified a portion of the clone. Without setting these pixels, the Clone() approach seems to finish like 1000 times faster. I'm not quite sure what exactly is going on there.
The results seem to be about what I'd expect from what I've been reading online. However, the pointer approach is the slowest one I implemented outside the Get/Set Pixel methods. From my personal studies, I expected pointers to be one of the fastest, if not the fastest.
I've got a couple of questions related to my project. Am I using pointers optimally for this situation? Why would the cloning approach be affected by changing a pixel in the cloned image? Is there another approach that can copy an image in a shorter amount of time? Any other advice or tips? Thanks.
Numbers look reasonable. Summary:
- GetPixel/SetPixel are slow
- specially written code is faster
- writing a fast version of memcpy is very hard; beating the library version is almost impossible in the general case for any language (one can expect better performance only in special cases, like a specific size or target CPU)
If you want to play more with pointers, try and measure:
- try the same code in regular C# (indexes)
- try switching to int for the copying (see the sketch below)
- notice that each row is DWORD-aligned, so there is no need to special-case the tail
- re-implement the block copy from the marshaling sample
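As a rough illustration of the "switch to int" point, here is a minimal sketch (my code, not the answerer's) of the pointer copy rewritten to move four bytes per iteration; it is meant to sit next to the other copy methods above and relies on GDI+ rows being DWORD-aligned:
private unsafe void IntPointerCopyMethod(Bitmap sourceImage, Bitmap destinationImage)
{
    // Lock the bitmap's bits, exactly like the byte-wise version above.
    Rectangle rect = new Rectangle(0, 0, sourceImage.Width, sourceImage.Height);
    BitmapData readData = sourceImage.LockBits(rect, ImageLockMode.ReadOnly, sourceImage.PixelFormat);
    BitmapData writeData = destinationImage.LockBits(rect, ImageLockMode.WriteOnly, sourceImage.PixelFormat);

    // Stride is a multiple of 4, so the total byte count divides evenly by 4
    // and we can copy one int (4 bytes) per iteration instead of one byte.
    int* readPointer = (int*)readData.Scan0;
    int* writePointer = (int*)writeData.Scan0;
    int lengthInInts = (readData.Stride * readData.Height) / 4;

    for (int i = 0; i < lengthInInts; i++)
    {
        *writePointer++ = *readPointer++;
    }

    sourceImage.UnlockBits(readData);
    destinationImage.UnlockBits(writeData);
}
Even with the wider copies, expect the Marshal.Copy version above to stay ahead of a hand-rolled loop in most cases, which matches the point about library memcpy.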
Background is, I'm using XNA, and I render Awesomium to an Image, which I then make a Texture2D from.
The code to render Awesomium to an Image via a file looks something like this:
webView.Render().SaveToPNG("awesomium.png", true);
var image = Image.FromFile("awesomium.png", true);
Which works fine, but it's dog slow (as you can imagine).
Is there a way to use Awesomium to render to a System.Drawing.Image without writing out to the filesystem?
In the end I found my answer in awesomiumdotnet. I guess the official wrapper isn't always the most complete :/
public static class Rbex
{
public static Bitmap ToBitmap(this RenderBuffer buffer)
{
const int depth = 4;
const PixelFormat pf = PixelFormat.Format32bppArgb;
// Create bitmap
Bitmap bitmap = new Bitmap(buffer.GetWidth(), buffer.GetHeight(), pf);
BitmapData data = bitmap.LockBits(new Rectangle(0,0, buffer.GetWidth(), buffer.GetHeight()), ImageLockMode.WriteOnly, bitmap.PixelFormat);
buffer.CopyTo(data.Scan0, buffer.GetWidth() * depth, depth, false);
bitmap.UnlockBits(data);
return bitmap;
}
}
Let's say I get a HBITMAP object/handle from a native Windows function. I can convert it to a managed bitmap using Bitmap.FromHbitmap(nativeHBitmap), but if the native image has transparency information (alpha channel), it is lost by this conversion.
There are a few questions on Stack Overflow regarding this issue. Using information from the first answer of this question (How to draw ARGB bitmap using GDI+?), I wrote a piece of code that I've tried and it works.
It basically gets the native HBitmap width, height and the pointer to the location of the pixel data using GetObject and the BITMAP structure, and then calls the managed Bitmap constructor:
Bitmap managedBitmap = new Bitmap(bitmapStruct.bmWidth, bitmapStruct.bmHeight,
bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits);
As I understand (please correct me if I'm wrong), this does not copy the actual pixel data from the native HBitmap to the managed bitmap, it simply points the managed bitmap to the pixel data from the native HBitmap.
And I don't draw the bitmap here on another Graphics (DC) or on another bitmap, to avoid unnecessary memory copying, especially for large bitmaps.
I can simply assign this bitmap to a PictureBox control or to the Form's BackgroundImage property. And it works: the bitmap is displayed correctly, using transparency.
When I no longer use the bitmap, I make sure the BackgroundImage property is no longer pointing to the bitmap, and I dispose both the managed bitmap and the native HBitmap.
The Question: Can you tell me if this reasoning and code seem correct? I hope I will not get some unexpected behavior or errors. And I hope I'm freeing all the memory and objects correctly.
private void Example()
{
IntPtr nativeHBitmap = IntPtr.Zero;
/* Get the native HBitmap object from a Windows function here */
// Create the BITMAP structure and get info from our nativeHBitmap
NativeMethods.BITMAP bitmapStruct = new NativeMethods.BITMAP();
NativeMethods.GetObjectBitmap(nativeHBitmap, Marshal.SizeOf(bitmapStruct), ref bitmapStruct);
// Create the managed bitmap using the pointer to the pixel data of the native HBitmap
Bitmap managedBitmap = new Bitmap(
bitmapStruct.bmWidth, bitmapStruct.bmHeight, bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits);
// Show the bitmap
this.BackgroundImage = managedBitmap;
/* Run the program, use the image */
MessageBox.Show("running...");
// When the image is no longer needed, dispose both the managed Bitmap object and the native HBitmap
this.BackgroundImage = null;
managedBitmap.Dispose();
NativeMethods.DeleteObject(nativeHBitmap);
}
internal static class NativeMethods
{
[StructLayout(LayoutKind.Sequential)]
public struct BITMAP
{
public int bmType;
public int bmWidth;
public int bmHeight;
public int bmWidthBytes;
public ushort bmPlanes;
public ushort bmBitsPixel;
public IntPtr bmBits;
}
[DllImport("gdi32", CharSet = CharSet.Auto, EntryPoint = "GetObject")]
public static extern int GetObjectBitmap(IntPtr hObject, int nCount, ref BITMAP lpObject);
[DllImport("gdi32.dll")]
internal static extern bool DeleteObject(IntPtr hObject);
}
The following code worked for me whether the HBITMAP is an icon or a bmp; it doesn't flip the image when it's an icon, and it also works with bitmaps that don't contain an alpha channel:
private static Bitmap GetBitmapFromHBitmap(IntPtr nativeHBitmap)
{
Bitmap bmp = Bitmap.FromHbitmap(nativeHBitmap);
if (Bitmap.GetPixelFormatSize(bmp.PixelFormat) < 32)
return bmp;
BitmapData bmpData;
if (IsAlphaBitmap(bmp, out bmpData))
return GetlAlphaBitmapFromBitmapData(bmpData);
return bmp;
}
private static Bitmap GetlAlphaBitmapFromBitmapData(BitmapData bmpData)
{
return new Bitmap(
bmpData.Width,
bmpData.Height,
bmpData.Stride,
PixelFormat.Format32bppArgb,
bmpData.Scan0);
}
private static bool IsAlphaBitmap(Bitmap bmp, out BitmapData bmpData)
{
Rectangle bmBounds = new Rectangle(0, 0, bmp.Width, bmp.Height);
bmpData = bmp.LockBits(bmBounds, ImageLockMode.ReadOnly, bmp.PixelFormat);
try
{
for (int y = 0; y <= bmpData.Height - 1; y++)
{
for (int x = 0; x <= bmpData.Width - 1; x++)
{
Color pixelColor = Color.FromArgb(
Marshal.ReadInt32(bmpData.Scan0, (bmpData.Stride * y) + (4 * x)));
if (pixelColor.A > 0 & pixelColor.A < 255)
{
return true;
}
}
}
}
finally
{
bmp.UnlockBits(bmpData);
}
return false;
}
Right, no copy is made. Which is why the Remarks section of the MSDN Library says:
The caller is responsible for allocating and freeing the block of memory specified by the scan0 parameter, however, the memory should not be released until the related Bitmap is released.
This wouldn't be a problem if the pixel data were copied. Incidentally, this is normally a difficult problem to deal with: you can't tell when the client code called Dispose(), and there's no way to intercept that call, which makes it impossible to make such a bitmap behave like a drop-in replacement for Bitmap. The client code has to be aware that additional work is needed.
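One way to act on that advice is to copy the pixel rows into a fresh managed bitmap right away, so DeleteObject can be called immediately. Here is a rough sketch of mine (not from the answer), reusing the question's NativeMethods declarations and the same 32bpp assumption as the question's code; the asker's own follow-up below takes a Graphics.DrawImage route instead.
public static Bitmap CopyHBitmapPixels(IntPtr nativeHBitmap)
{
    // Read width, height and the pixel-data pointer from the native HBITMAP
    NativeMethods.BITMAP bitmapStruct = new NativeMethods.BITMAP();
    NativeMethods.GetObjectBitmap(nativeHBitmap, Marshal.SizeOf(bitmapStruct), ref bitmapStruct);

    Bitmap copy = new Bitmap(bitmapStruct.bmWidth, bitmapStruct.bmHeight, PixelFormat.Format32bppArgb);
    Rectangle bounds = new Rectangle(0, 0, copy.Width, copy.Height);
    BitmapData writeData = copy.LockBits(bounds, ImageLockMode.WriteOnly, copy.PixelFormat);
    try
    {
        // Copy row by row, since the managed stride may differ from bmWidthBytes
        byte[] row = new byte[bitmapStruct.bmWidth * 4];
        for (int y = 0; y < bitmapStruct.bmHeight; y++)
        {
            IntPtr sourceRow = IntPtr.Add(bitmapStruct.bmBits, y * bitmapStruct.bmWidthBytes);
            IntPtr destinationRow = IntPtr.Add(writeData.Scan0, y * writeData.Stride);
            Marshal.Copy(sourceRow, row, 0, row.Length);
            Marshal.Copy(row, 0, destinationRow, row.Length);
        }
    }
    finally
    {
        copy.UnlockBits(writeData);
    }

    // The pixels now live in managed memory, so the native object can be freed immediately
    NativeMethods.DeleteObject(nativeHBitmap);
    return copy;
}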
After reading the good points made by Hans Passant in his answer, I changed the method to immediately copy the pixel data into the managed bitmap, and free the native bitmap.
I'm creating two managed bitmap objects (but only one allocates memory for the actual pixel data), and using Graphics.DrawImage to copy the image. Is there a better way to accomplish this? Or is this good/fast enough?
public static Bitmap CopyHBitmapToBitmap(IntPtr nativeHBitmap)
{
// Get width, height and the address of the pixel data for the native HBitmap
NativeMethods.BITMAP bitmapStruct = new NativeMethods.BITMAP();
NativeMethods.GetObjectBitmap(nativeHBitmap, Marshal.SizeOf(bitmapStruct), ref bitmapStruct);
// Create a managed bitmap that has its pixel data pointing to the pixel data of the native HBitmap
// No memory is allocated for its pixel data
Bitmap managedBitmapPointer = new Bitmap(
bitmapStruct.bmWidth, bitmapStruct.bmHeight, bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits);
// Create a managed bitmap and allocate memory for pixel data
Bitmap managedBitmapReal = new Bitmap(bitmapStruct.bmWidth, bitmapStruct.bmHeight, PixelFormat.Format32bppArgb);
// Copy the pixels of the native HBitmap into the canvas of the managed bitmap
Graphics graphics = Graphics.FromImage(managedBitmapReal);
graphics.DrawImage(managedBitmapPointer, 0, 0);
// Delete the native HBitmap object and free memory
NativeMethods.DeleteObject(nativeHBitmap);
// Return the managed bitmap, clone of the native HBitmap, with correct transparency
return managedBitmapReal;
}