I have a byte array that needs to be displayed on the desktop (or a Form). I'm using the WinAPI for that and I'm not sure how to set all the pixels at once. The byte array is in memory and needs to be displayed as quickly as possible (using just the WinAPI).
I'm using C#, but simple pseudo-code would be fine for me:
// create bitmap
byte[] bytes = ...;// contains pixel data, 1 byte per pixel
HDC desktopDC = GetWindowDC(GetDesktopWindow());
HDC bitmapDC = CreateCompatibleDC(desktopDC);
HBITMAP bitmap = CreateCompatibleBitmap(bitmapDC, 320, 240);
DeleteObject(SelectObject(bitmapDC, bitmap));
BITMAPINFO info = new BITMAPINFO();
info.bmiColors = new tagRGBQUAD[256];
for (int i = 0; i < info.bmiColors.Length; i++)
{
info.bmiColors[i].rgbRed = (byte)i;
info.bmiColors[i].rgbGreen = (byte)i;
info.bmiColors[i].rgbBlue = (byte)i;
info.bmiColors[i].rgbReserved = 0;
}
info.bmiHeader = new BITMAPINFOHEADER();
info.bmiHeader.biSize = (uint) Marshal.SizeOf(info.bmiHeader);
info.bmiHeader.biWidth = 320;
info.bmiHeader.biHeight = 240;
info.bmiHeader.biPlanes = 1;
info.bmiHeader.biBitCount = 8;
info.bmiHeader.biCompression = BI_RGB;
info.bmiHeader.biSizeImage = 0;
info.bmiHeader.biClrUsed = 256;
info.bmiHeader.biClrImportant = 0;
// next line throws wrong parameter exception all the time
// SetDIBits(bitmapDC, bh, 0, 240, Marshal.UnsafeAddrOfPinnedArrayElement(info.bmiColors, 0), ref info, DIB_PAL_COLORS);
// how do i store all pixels into the bitmap at once ?
for (int i = 0; i < bytes.Length;i++)
SetPixel(bitmapDC, i % 320, i / 320, random(0x1000000));
// draw the bitmap
BitBlt(desktopDC, 0, 0, 320, 240, bitmapDC, 0, 0, SRCCOPY);
When I just try to set each pixel by itself with SetPixel() I see a monochrome image with no gray levels, only black and white. How can I correctly create a grayscale bitmap for display? And how do I do that quickly?
Update:
The call ends up in an error outside of my program, in the WinAPI. I can't catch the exception:
public const int DIB_RGB_COLORS = 0;
public const int DIB_PAL_COLORS = 1;
[DllImport("gdi32.dll")]
public static extern int SetDIBits(IntPtr hdc, IntPtr hbmp, uint uStartScan, uint cScanLines, byte[] lpvBits, [In] ref BITMAPINFO lpbmi, uint fuColorUse);
// parameters as above
SetDIBits(bitmapDC, bitmap, 0, 240, bytes, ref info, DIB_RGB_COLORS);
Two of the SetDIBits parameters are wrong:
lpvBits - this is the image data but you're passing the palette data. You should be passing your bytes array.
lpBmi - this is OK - the BITMAPINFO structure contains both the BITMAPINFOHEADER and the palette so you don't need to pass the palette separately. My answer to your other question describes how to declare the structure.
fuColorUse - this describes the format of the palette. You are using an RGB palette so you should pass DIB_RGB_COLORS.
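For reference, a minimal sketch of the corrected sequence in C# might look like this. It assumes the P/Invoke declarations from the question and a BITMAPINFO struct with the 256-entry palette marshalled inline; it also creates the HBITMAP from the screen DC rather than the fresh memory DC (a bitmap compatible with a brand-new memory DC is monochrome, which is why SetPixel showed only black and white):
// Sketch, not drop-in code: assumes BITMAPINFO is declared with the palette inline, e.g.
//   [StructLayout(LayoutKind.Sequential)]
//   struct BITMAPINFO
//   {
//       public BITMAPINFOHEADER bmiHeader;
//       [MarshalAs(UnmanagedType.ByValArray, SizeConst = 256)]
//       public tagRGBQUAD[] bmiColors;
//   }
// and the P/Invoke declarations shown in the question.
IntPtr desktopDC = GetWindowDC(GetDesktopWindow());
IntPtr bitmapDC = CreateCompatibleDC(desktopDC);
// Create the bitmap from the *screen* DC so it gets the screen's color depth.
IntPtr bitmap = CreateCompatibleBitmap(desktopDC, 320, 240);
DeleteObject(SelectObject(bitmapDC, bitmap));
var info = new BITMAPINFO();
info.bmiColors = new tagRGBQUAD[256];
for (int i = 0; i < 256; i++)            // grayscale palette
{
    info.bmiColors[i].rgbRed = (byte)i;
    info.bmiColors[i].rgbGreen = (byte)i;
    info.bmiColors[i].rgbBlue = (byte)i;
}
info.bmiHeader.biSize = (uint)Marshal.SizeOf(typeof(BITMAPINFOHEADER));
info.bmiHeader.biWidth = 320;
info.bmiHeader.biHeight = -240;          // negative = top-down, so bytes[0] is the top-left pixel
info.bmiHeader.biPlanes = 1;
info.bmiHeader.biBitCount = 8;
info.bmiHeader.biCompression = BI_RGB;
info.bmiHeader.biClrUsed = 256;
// bytes holds one byte per pixel; each scan line must be padded to a multiple
// of 4 bytes (a width of 320 already is).
int scanLines = SetDIBits(bitmapDC, bitmap, 0, 240, bytes, ref info, DIB_RGB_COLORS);
BitBlt(desktopDC, 0, 0, 320, 240, bitmapDC, 0, 0, SRCCOPY);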
I need to change my screen capture code to get a pixel array instead of a Bitmap.
I change the code to this:
BitBlt > Image.FromHbitmap(pointer) > LockBits > pixel array
But I'm checking whether it's possible to cut out some of the middle men and have something like this:
BitBlt > Marshal.Copy > pixel array
Or even:
WinApi method that gets the screen region as a pixel array
So far, I tried to use this code, without success:
public static byte[] CaptureAsArray(Size size, int positionX, int positionY)
{
var hDesk = GetDesktopWindow();
var hSrce = GetWindowDC(hDesk);
var hDest = CreateCompatibleDC(hSrce);
var hBmp = CreateCompatibleBitmap(hSrce, (int)size.Width, (int)size.Height);
var hOldBmp = SelectObject(hDest, hBmp);
try
{
new System.Security.Permissions.UIPermission(System.Security.Permissions.UIPermissionWindow.AllWindows).Demand();
var b = BitBlt(hDest, 0, 0, (int)size.Width, (int)size.Height, hSrce, positionX, positionY, CopyPixelOperation.SourceCopy | CopyPixelOperation.CaptureBlt);
var length = 4 * (int)size.Width * (int)size.Height;
var bytes = new byte[length];
Marshal.Copy(hBmp, bytes, 0, length);
//return b ? Image.FromHbitmap(hBmp) : null;
return bytes;
}
finally
{
SelectObject(hDest, hOldBmp);
DeleteObject(hBmp);
DeleteDC(hDest);
ReleaseDC(hDesk, hSrce);
}
return null;
}
This code gives me a System.AccessViolationException when stepping over Marshal.Copy.
Is there any more efficient way of getting screen pixels as a byte array while using BitBlt or similar screen capture methods?
EDIT:
As found here, and as suggested by CodyGray, I should use
var b = Native.BitBlt(_compatibleDeviceContext, 0, 0, Width, Height, _windowDeviceContext, Left, Top, Native.CopyPixelOperation.SourceCopy | Native.CopyPixelOperation.CaptureBlt);
var bi = new Native.BITMAPINFOHEADER();
bi.biSize = (uint)Marshal.SizeOf(bi);
bi.biBitCount = 32;
bi.biClrUsed = 0;
bi.biClrImportant = 0;
bi.biCompression = 0;
bi.biHeight = Height;
bi.biWidth = Width;
bi.biPlanes = 1;
var data = new byte[4 * Width * Height];
Native.GetDIBits(_windowDeviceContext, _compatibleBitmap, 0, (uint)Height, data, ref bi, Native.DIB_Color_Mode.DIB_RGB_COLORS);
My data array has all the pixels of the screenshot.
Now, I'm going to test whether there are any performance improvements or not.
Yeah, you can't just start accessing the raw bits of a BITMAP object through an HBITMAP (as returned by CreateCompatibleBitmap). An HBITMAP is just a handle, as the name suggests. It's not a pointer in the classic C sense, pointing to the beginning of a pixel array; handles are more like indirect pointers.
GetDIBits is the appropriate solution to get the raw, device-independent pixel array from a bitmap that you can iterate through. But you'll still need to use the code you have to get the screen bitmap in the first place. Essentially, you want something like this. Of course, you'll need to translate it into C#, but that shouldn't be difficult, since you already know how to call WinAPI functions.
Note that you do not need to call GetDesktopWindow or GetWindowDC. Just pass NULL as the handle to GetDC; it has the same effect of returning a screen DC, which you can then use to create a compatible bitmap. In general, you should almost never call GetDesktopWindow.
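A rough C# sketch of that approach (assuming the usual P/Invoke declarations for GetDC, CreateCompatibleDC, CreateCompatibleBitmap, SelectObject, BitBlt, GetDIBits, DeleteObject, DeleteDC and ReleaseDC, plus the BITMAPINFOHEADER struct and DIB_RGB_COLORS constant already shown) might look like this:
public static byte[] CaptureScreenRegion(int left, int top, int width, int height)
{
    IntPtr screenDC = GetDC(IntPtr.Zero);               // NULL window handle = screen DC
    IntPtr memDC = CreateCompatibleDC(screenDC);
    IntPtr hBmp = CreateCompatibleBitmap(screenDC, width, height);
    IntPtr oldBmp = SelectObject(memDC, hBmp);
    try
    {
        BitBlt(memDC, 0, 0, width, height, screenDC, left, top,
               CopyPixelOperation.SourceCopy | CopyPixelOperation.CaptureBlt);
        // GetDIBits wants the bitmap deselected from any DC, so put the old one back first.
        SelectObject(memDC, oldBmp);
        var bi = new BITMAPINFOHEADER();
        bi.biSize = (uint)Marshal.SizeOf(bi);
        bi.biWidth = width;
        bi.biHeight = -height;                          // negative height = top-down row order
        bi.biPlanes = 1;
        bi.biBitCount = 32;                             // BGRA, 4 bytes per pixel
        bi.biCompression = 0;                           // BI_RGB
        var pixels = new byte[4 * width * height];
        GetDIBits(memDC, hBmp, 0, (uint)height, pixels, ref bi, DIB_RGB_COLORS);
        return pixels;
    }
    finally
    {
        DeleteObject(hBmp);
        DeleteDC(memDC);
        ReleaseDC(IntPtr.Zero, screenDC);
    }
}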
I'm loading a C++ library from my C# code dynamically. I want to find a small image inside a large one, converting the large image to a byte[] and reading the small image from a physical path. When I call imdecode, large_img always returns 0 cols and rows.
C#
[UnmanagedFunctionPointer(CallingConvention.Cdecl)]
private delegate ImageParams GetImageParams(IntPtr dataPtr, int size, string path);
// ...
byte[] largeImgByteArr = this.BitmapToByteArray(bmp);
IntPtr dataPtr = Marshal.AllocHGlobal(largeImgByteArr.Length);
Marshal.Copy(dataPtr, largeImgByteArr, 0, largeImgByteArr.Length);
C++
ImageParams GetImageParams(BYTE* largeImgBuf, int bufLength, const char* smallImgPath)
{
Mat large_img_data(bufLength, 1, CV_32FC1, largeImgBuf);
Mat large_img = imdecode(large_img_data, IMREAD_COLOR);
Mat small_img = imread(smallImgPath, IMREAD_COLOR);
int result_cols = large_img.cols - small_img.cols + 1;
int result_rows = large_img.rows - small_img.rows + 1;
Mat result;
result.create(result_cols, result_rows, CV_32FC1);
matchTemplate(large_img, small_img, result, CV_TM_SQDIFF_NORMED);
normalize(result, result, 0, 1, NORM_MINMAX, -1, Mat());
}
What am I doing wrong here?
Note: I have checked that the image path is correct and the byte array is not empty.
Edit 1
I changed my code a bit by providing the large image's width and height; I also got rid of imdecode and changed it to something like in this post.
ImageParams GetImageParams(BYTE* largeImgBuf, int height, int width, int bufLength, const char* smallImgPath)
{
// Mat large_img = imdecode(large_img_data, IMREAD_COLOR);
Mat large_img = Mat(height, width, CV_8UC3, largeImgBuf);
Mat small_img = imread(templPath, 1);
/// ...
}
Now it returns rows and columns, but when I call the matchTemplate method it throws an exception:
Remember that the Bitmap structure in C# uses data padding (the stride is a multiple of 4) whereas OpenCV may not. Try creating the Mat object directly from your byte array, but adjust the step (stride) value; the Mat constructor just assigns the data without taking ownership or reallocating.
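On the C# side, one way to follow that advice is to hand the locked bitmap memory straight to the native function instead of copying it through AllocHGlobal. This is only a sketch: the extra width/height/stride parameters and the GetImageParamsDelegate name are hypothetical and must match your actual C++ export:
// Hypothetical delegate: the native side is assumed to take the pixel pointer,
// the dimensions and the stride, e.g.
//   ImageParams GetImageParams(BYTE* buf, int width, int height, int stride, const char* smallImgPath)
[UnmanagedFunctionPointer(CallingConvention.Cdecl)]
private delegate ImageParams GetImageParamsDelegate(IntPtr dataPtr, int width, int height, int stride, string path);
ImageParams FindTemplate(Bitmap bmp, string smallImgPath, GetImageParamsDelegate getImageParams)
{
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    // Lock as 24bpp BGR so it matches a CV_8UC3 Mat on the native side.
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
    try
    {
        // Scan0 points at the first pixel row; Stride is the padded row length in bytes.
        return getImageParams(data.Scan0, bmp.Width, bmp.Height, data.Stride, smallImgPath);
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}
On the native side, the stride then goes into the Mat constructor's step parameter, as described above.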
EDIT
Here's an example of how to create an OpenCV Mat object from a Bitmap. The data from the bitmap is not copied, only assigned. The PixelFormat and the OpenCV Mat type must have a corresponding element size in bytes.
cv::Mat ImageBridge::cvimage(System::Drawing::Bitmap^ bitmap){
if(bitmap != nullptr){
switch(bitmap->PixelFormat){
case System::Drawing::Imaging::PixelFormat::Format24bppRgb:
return bmp2mat(bitmap, CV_8UC3);
case System::Drawing::Imaging::PixelFormat::Format8bppIndexed:
return bmp2mat(bitmap, CV_8U);
default: return cv::Mat();
}
}
else
return cv::Mat();
}
cv::Mat ImageBridge::bmp2mat(System::Drawing::Bitmap^ bitmap, int image_type){
auto bitmap_data = bitmap->LockBits(
System::Drawing::Rectangle(0, 0, bitmap->Width, bitmap->Height),
System::Drawing::Imaging::ImageLockMode::ReadWrite,
bitmap->PixelFormat);
char* bmpptr = (char*)bitmap_data->Scan0.ToPointer();
cv::Mat image(
cv::Size(bitmap->Width, bitmap->Height),
image_type,
bmpptr,
bitmap_data->Stride);
bitmap->UnlockBits(bitmap_data);
return image;
}
EDIT 2
Conversion in reverse: this time the data from the Mat image is copied, as the Bitmap allocates its own memory.
System::Drawing::Bitmap^ ImageBridge::bitmap(cv::Mat& image){
if(!image.empty() && image.type() == CV_8UC3)
return mat2bmp(image, System::Drawing::Imaging::PixelFormat::Format24bppRgb);
else if(!image.empty() && image.type() == CV_8U)
return mat2bmp(image, System::Drawing::Imaging::PixelFormat::Format8bppIndexed);
else
return nullptr;
}
System::Drawing::Bitmap^ ImageBridge::mat2bmp(cv::Mat& image, System::Drawing::Imaging::PixelFormat pixel_format){
if(image.empty())
return nullptr;
System::Drawing::Bitmap^ bitmap = gcnew System::Drawing::Bitmap(image.cols, image.rows, pixel_format);
auto bitmap_data = bitmap->LockBits(
System::Drawing::Rectangle(0, 0, bitmap->Width, bitmap->Height),
System::Drawing::Imaging::ImageLockMode::ReadWrite,
pixel_format);
char* bmpptr = (char*)bitmap_data->Scan0.ToPointer();
int line_length = (int)image.step;
int bmp_stride = bitmap_data->Stride;
assert(!image.isSubmatrix());
assert(bmp_stride >= 0);
for(int l = 0; l < image.rows; l++){
char* cvptr = (char*)image.ptr(l);
int bmp_line_index = l * bmp_stride;
for(int i = 0; i < line_length; ++i)
bmpptr[bmp_line_index + i] = cvptr[i];
}
bitmap->UnlockBits(bitmap_data);
return bitmap;
}
Or, if you have a Mat image whose step is a multiple of 4, you can use the non-copy version.
System::Drawing::Bitmap^ ImageBridge::bitmap2(cv::Mat& image){
System::Drawing::Bitmap^ bitmap;
assert(!image.isSubmatrix());
if(!image.empty() && image.type() == CV_8UC3){
bitmap = gcnew System::Drawing::Bitmap(
image.cols, image.rows,
3 * image.cols,
System::Drawing::Imaging::PixelFormat::Format24bppRgb,
System::IntPtr(image.data));
}
else if(!image.empty() && image.type() == CV_8U){
bitmap = gcnew System::Drawing::Bitmap(
image.cols, image.rows,
image.cols,
System::Drawing::Imaging::PixelFormat::Format8bppIndexed,
System::IntPtr(image.data));
}
return bitmap;
}
According to the documentation
The function reads an image from the specified buffer in the memory. If the buffer is too short or contains invalid data, the empty matrix/image is returned.
Try checking the error code after the imdecode part
#include <errno.h>
cout << errno;
I am trying to speed up my image detection class using LockBits, yet this causes problems with the code and thus it does not run. How can I go about using LockBits and GetPixel at the same time in order to speed up image detection, or can someone show me an alternative which is just as fast?
code:
static IntPtr Iptr = IntPtr.Zero;
static BitmapData bitmapData = null;
static public byte[] Pixels { get; set; }
static public int Depth { get; private set; }
static public int Width { get; private set; }
static public int Height { get; private set; }
static public void LockBits(Bitmap source)
{
// Get width and height of bitmap
Width = source.Width;
Height = source.Height;
// get total locked pixels count
int PixelCount = Width * Height;
// Create rectangle to lock
Rectangle rect = new Rectangle(0, 0, Width, Height);
// get source bitmap pixel format size
Depth = System.Drawing.Bitmap.GetPixelFormatSize(source.PixelFormat);
// Lock bitmap and return bitmap data
bitmapData = source.LockBits(rect, ImageLockMode.ReadWrite,
source.PixelFormat);
// create byte array to copy pixel values
int step = Depth / 8;
Pixels = new byte[PixelCount * step];
Iptr = bitmapData.Scan0;
// Copy data from pointer to array
Marshal.Copy(Iptr, Pixels, 0, Pixels.Length);
}
static public bool SimilarColors(int R1, int G1, int B1, int R2, int G2, int B2, int Tolerance)
{
bool returnValue = true;
if (Math.Abs(R1 - R2) > Tolerance || Math.Abs(G1 - G2) > Tolerance || Math.Abs(B1 - B2) > Tolerance)
{
returnValue = false;
}
return returnValue;
}
public bool findImage(Bitmap small, Bitmap large, out Point location)
{
unsafe
{
LockBits(small);
LockBits(large);
//Loop through large images width
for (int largeX = 0; largeX < large.Width; largeX++)
{
//And height
for (int largeY = 0; largeY < large.Height; largeY++)
{
//Loop through the small width
for (int smallX = 0; smallX < small.Width; smallX++)
{
//And height
for (int smallY = 0; smallY < small.Height; smallY++)
{
//Get current pixels for both image
Color currentSmall = small.GetPixel(smallX, smallY);
Color currentLarge = large.GetPixel(largeX + smallX, largeY + smallY);
//If they dont match (i.e. the image is not there)
if (!colorsMatch(currentSmall, currentLarge))
//Goto the next pixel in the large image
goto nextLoop;
}
}
//If all the pixels match up, then return true and change Point location to the top left co-ordinates where it was found
location = new Point(largeX, largeY);
return true;
//Go to next pixel on large image
nextLoop:
continue;
}
}
//Return false if image is not found, and set an empty point
location = Point.Empty;
return false;
}
}
You wouldn't want to rely on getPixel() for image processing; it's okay to make an occasional call to get a point value (e.g. on mouseover), but in general it's preferable to do image processing in image memory or in some 2D array that you can convert to a Bitmap when necessary.
To start, you might try writing a method that uses LockBits/UnlockBits to extract an array that is convenient to manipulate. Once you're done manipulating the array, you can write it back to a bitmap using a different LockBits/UnlockBits function.
Here's some sample code I've used in the past. The first function returns a 1D array of values from a Bitmap. Since you know the bitmap's width, you can convert this 1D array to a 2D array for further processing. Once you're done processing, you can call the second function to convert the (modified) 1D array into a bitmap again.
public static byte[] Array1DFromBitmap(Bitmap bmp){
if (bmp == null) throw new NullReferenceException("Bitmap is null");
Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadWrite, bmp.PixelFormat);
IntPtr ptr = data.Scan0;
//declare an array to hold the bytes of the bitmap
int numBytes = data.Stride * bmp.Height;
byte[] bytes = new byte[numBytes];
//copy the RGB values into the array
System.Runtime.InteropServices.Marshal.Copy(ptr, bytes, 0, numBytes);
bmp.UnlockBits(data);
return bytes;
}
public static Bitmap BitmapFromArray1D(byte[] bytes, int width, int height)
{
Bitmap grayBmp = new Bitmap(width, height, PixelFormat.Format8bppIndexed);
Rectangle grayRect = new Rectangle(0, 0, grayBmp.Width, grayBmp.Height);
BitmapData grayData = grayBmp.LockBits(grayRect, ImageLockMode.ReadWrite, grayBmp.PixelFormat);
IntPtr grayPtr = grayData.Scan0;
int grayBytes = grayData.Stride * grayBmp.Height;
ColorPalette pal = grayBmp.Palette;
for (int g = 0; g < 256; g++){
pal.Entries[g] = Color.FromArgb(g, g, g);
}
grayBmp.Palette = pal;
System.Runtime.InteropServices.Marshal.Copy(bytes, 0, grayPtr, grayBytes);
grayBmp.UnlockBits(grayData);
return grayBmp;
}
These methods make assumptions about the Bitmap pixel format that may not work for you, but I hope the general idea is clear: use LockBits/UnlockBits to extract an array of bytes from a Bitmap so that you can write and debug algorithms most easily, and then use LockBits/UnlockBits again to write the array back to a Bitmap.
For portability, I would recommend that your methods return the desired data types rather than manipulating global variables within the methods themselves.
If you've been using getPixel(), then converting to/from arrays as described above could speed up your code considerably for a small investment of coding effort.
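To make the 1D-to-2D mapping concrete, here is a small helper sketch (my addition, assuming a 24bpp bitmap, i.e. 3 bytes per pixel stored as B, G, R, and the Stride value from the BitmapData above, which you would need to return alongside the bytes):
// Index into the 1D array returned by Array1DFromBitmap.
// Assumes PixelFormat.Format24bppRgb: 3 bytes per pixel in B, G, R order,
// with each row padded out to 'stride' bytes (stride = data.Stride from LockBits).
public static Color GetPixelFromArray(byte[] bytes, int stride, int x, int y)
{
    int index = y * stride + x * 3;
    byte b = bytes[index];
    byte g = bytes[index + 1];
    byte r = bytes[index + 2];
    return Color.FromArgb(r, g, b);
}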
OK, where to start? It's better if you understand what you are doing with LockBits.
First of all, make sure that you don't overwrite your byte array with
LockBits(small);
LockBits(large);
because with the second call, all the first call does is lock your image, and that is not good since you don't unlock it again.
So add another byte array that represents the image.
You can do something like this
LockBits(small, true);
LockBits(large, false);
and change your LockBits method
static public void LockBits(Bitmap source, bool flag)
{
...
Marshal.Copy(Iptr, Pixels, 0, Pixels.Length);
if(flag)
PixelsSmall=Pixels;
else
PixelsLarge=Pixels;
}
where PixelsLarge and PixelsSmall are globals and Pixels isn't.
Those two now contain your images. Next you have to compare them.
You have to compare each "set of bytes", and for that you have to know the pixel format.
Is it 32 bits per pixel, 24, or only 8 (ARGB, RGB, grayscale)?
Let's take ARGB images. In this case one set would consist of 4 bytes (= 32/8).
I am not sure about the order, but I think the order of one set is ABGR or BGRA.
Hope this helps. If you don't figure out how to compare the right pixels, ask again. Ah, and don't forget to call UnlockBits when you're done.
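A minimal sketch of that comparison (my addition, assuming 32 bits per pixel, which GDI+ stores as B, G, R, A in memory, and that the two Stride values from LockBits are kept alongside PixelsSmall and PixelsLarge):
// Compare one pixel of the small image against one pixel of the large image
// with a per-channel tolerance. Assumes 32bpp data in B, G, R, A order and that
// smallStride/largeStride are the Stride values returned by LockBits.
static bool PixelsMatch(byte[] pixelsSmall, int smallStride, int smallX, int smallY,
                        byte[] pixelsLarge, int largeStride, int largeX, int largeY,
                        int tolerance)
{
    int s = smallY * smallStride + smallX * 4;
    int l = largeY * largeStride + largeX * 4;
    return Math.Abs(pixelsSmall[s]     - pixelsLarge[l])     <= tolerance   // B
        && Math.Abs(pixelsSmall[s + 1] - pixelsLarge[l + 1]) <= tolerance   // G
        && Math.Abs(pixelsSmall[s + 2] - pixelsLarge[l + 2]) <= tolerance;  // R
}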
I'm currently trying to use WriteableBitmap to take an IntPtr from a scan of images and turn each one into a Bitmap. I want to use WriteableBitmap because I'm having an issue with standard GDI:
GDI+ System.Drawing.Bitmap gives error Parameter is not valid intermittently
There is a method on WriteableBitmap called WritePixels:
http://msdn.microsoft.com/en-us/library/aa346817.aspx
I'm not sure what to set for the buffer and the stride. Every example I find shows the stride as 0, although that throws an error. When I set the stride to 5 the image appears black. I know this may not be the most efficient code, but any help would be appreciated.
//create bitmap header
bmi = new BITMAPINFOHEADER();
//create initial rectangle
Int32Rect rect = new Int32Rect(0, 0, 0, 0);
//create duplicate intptr to use while in global lock
dibhand = dibhandp;
bmpptr = GlobalLock(dibhand);
//get the pixel sizes
pixptr = GetPixelInfo(bmpptr);
//create writeable bitmap
var wbitm = new WriteableBitmap(bmprect.Width, bmprect.Height, 96.0, 96.0, System.Windows.Media.PixelFormats.Bgr32, null);
//draw the image
wbitm.WritePixels(rect, dibhandp, 10, 0);
//convert the writeable bitmap to bitmap
var stream = new MemoryStream();
var encoder = new JpegBitmapEncoder();
encoder.Frames.Add(BitmapFrame.Create(wbitm));
encoder.Save(stream);
byte[] buffer = stream.GetBuffer();
var bitmap = new System.Drawing.Bitmap(new MemoryStream(buffer));
GlobalUnlock(dibhand);
GlobalFree(dibhand);
GlobalFree(dibhandp);
GlobalFree(bmpptr);
dibhand = IntPtr.Zero;
return bitmap;
An efficient way to work on Bitmaps in C# is to switch temporarily into unsafe mode (I know this doesn't answer the question exactly, but I think the OP did not manage to use Bitmap, so this could be a solution anyway). You just have to lock the bits and you're done:
unsafe private void GaussianFilter()
{
// Working images
using (Bitmap newImage = new Bitmap(width, height))
{
// Lock bits for performance reason
BitmapData newImageData = newImage.LockBits(new Rectangle(0, 0, newImage.Width,
newImage.Height), ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
byte* pointer = (byte*)newImageData.Scan0;
int offset = newImageData.Stride - newImageData.Width * 4;
// Compute gaussian filter on temp image
for (int j = 0; j < InputData.Height - 1; ++j)
{
for (int i = 1; i < InputData.Width - 1; ++i)
{
// You browse 4 bytes per 4 bytes
// The 4 bytes are: B G R A
byte blue = pointer[0];
byte green = pointer[1];
byte red = pointer[2];
byte alpha = pointer[3];
// Your business here by setting pointer[i] = ...
// If you don't use alpha don't forget to set it to 255 else your whole image will be black !!
// Go to next pixels
pointer += 4;
}
// Go to next line: do not forget pixel at last and first column
pointer += offset;
}
// Unlock image
newImage.UnlockBits(newImageData);
newImage.Save(@"D:\temp\OCR_gray_gaussian.tif");
}
}
This is really much more efficient than SetPixel(i, j); you just have to be careful about pointer limits (and not forget to unlock the data when you're done).
Now to answer your question about stride: the stride is the length of a line in bytes, rounded up to a multiple of 4. In my example I use the format Format32bppArgb, which uses 4 bytes per pixel (R, G, B and alpha), so newImageData.Stride and newImageData.Width * 4 are always the same. I use the offset in my loops only to show where it would be necessary.
But if you use another format, for instance Format24bppRgb, which uses 3 bytes per pixel (R, G and B only), then there may be an offset between stride and width. For a 10 * 10 pixel image in this format, you will have a stride of 10 * 3 = 30, plus 2 to reach the nearest multiple of 4, i.e. 32.
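As a small sketch (not from the original answer), the padded stride can be computed from the width and bytes per pixel like this:
// Rounds the raw row length up to the next multiple of 4, which is how
// GDI/GDI+ pads scan lines. For 10 pixels at 3 bytes/pixel: 30 -> 32.
static int ComputeStride(int widthInPixels, int bytesPerPixel)
{
    int rawRowLength = widthInPixels * bytesPerPixel;
    return (rawRowLength + 3) & ~3;
}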
I'm designing a Compact Framework control that supports transparency, and for the runtime version of the control everything works just fine doing the platform invoke on this API:
[DllImport("coredll.dll")]
extern public static Int32 AlphaBlend(IntPtr hdcDest, Int32 xDest, Int32 yDest, Int32 cxDest, Int32 cyDest, IntPtr hdcSrc, Int32 xSrc, Int32 ySrc, Int32 cxSrc, Int32 cySrc, BlendFunction blendFunction);
Obviously a call to "coredll.dll" isn't going to work in the desktop design-time experience, and for now, when the painting happens, I'm simply detecting that the control is being designed and painting it without any transparency. I would like to give a better design-time experience and show that same transparency in the Visual Studio designer.
I've tried making this platform call:
[DllImport("gdi32.dll", EntryPoint = "GdiAlphaBlend")]
public static extern bool AlphaBlendDesktop(IntPtr hdcDest, int nXOriginDest, int nYOriginDest,
int nWidthDest, int nHeightDest,
IntPtr hdcSrc, int nXOriginSrc, int nYOriginSrc, int nWidthSrc, int nHeightSrc,
BlendFunction blendFunction);
but while it returns true, nothing at all is painted to the design-time view.
Any thoughts?
This worked for me to fool the designer into performing AlphaBlend operations.
// PixelFormatIndexed = 0x00010000, // Indexes into a palette
// PixelFormatGDI = 0x00020000, // Is a GDI-supported format
// PixelFormatAlpha = 0x00040000, // Has an alpha component
// PixelFormatPAlpha = 0x00080000, // Pre-multiplied alpha
// PixelFormatExtended = 0x00100000, // Extended color 16 bits/channel
// PixelFormatCanonical = 0x00200000,
// PixelFormat32bppARGB = (10 | (32 << 8) | PixelFormatAlpha | PixelFormatGDI | PixelFormatCanonical),
// cheat the design time to create a bitmap with an alpha channel
using (Bitmap bmp = new Bitmap(image.Width, image.Height, (PixelFormat)
PixelFormat32bppARGB))
{
// copy the original image
using(Graphics g = Graphics.FromImage(bmp))
{
g.DrawImage(image,0,0);
}
// Lock the bitmap's bits.
Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
System.Drawing.Imaging.BitmapData bmpData =
bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadWrite,
(PixelFormat)PixelFormat32bppARGB);
// Get the address of the first line.
IntPtr ptr = bmpData.Scan0;
// Declare an array to hold the bytes of the bitmap.
// This code is specific to a bitmap with 32 bits per pixels.
int bytes = bmp.Width * bmp.Height * 4;
byte[] argbValues = new byte[bytes];
// Copy the ARGB values into the array.
System.Runtime.InteropServices.Marshal.Copy(ptr, argbValues, 0, bytes);
// Set every alpha value to the given transparency.
for (int counter = 3; counter < argbValues.Length; counter += 4)
argbValues[counter] = transparency;
// Copy the ARGB values back to the bitmap
System.Runtime.InteropServices.Marshal.Copy(argbValues, 0, ptr, bytes);
// Unlock the bits.
bmp.UnlockBits(bmpData);
gx.DrawImage(bmp, x, y);
}
Sorry that this is not exactly answering your question... but you should check out the UI Framework for the .NET Compact Framework, which does alpha blending already.
http://code.msdn.microsoft.com/uiframework