Problem getting the image row where all pixels are white - c#

I am trying to find out whether the image is clipped at the bottom, and if it is, I will split it into two images at the last all-white pixel row. Below are the simple methods I created to check for clipping and to find the empty (all-white) pixel rows. As you can see, this is not a very good solution and might cause performance issues for larger images, so if anyone can suggest a better approach it would be a great help:
private static bool IsImageBottomClipping(Bitmap image)
{
    for (int i = 0; i < image.Width; i++)
    {
        var pixel = image.GetPixel(i, image.Height - 1);
        if (pixel.ToArgb() != Color.White.ToArgb())
        {
            return true;
        }
    }
    return false;
}
private static int GetLastWhiteLine(Bitmap image)
{
    for (int i = image.Height - 1; i >= 0; i--)
    {
        int whitePixels = 0;
        for (int j = 0; j < image.Width; j++)
        {
            var pixel = image.GetPixel(j, i);
            if (pixel.ToArgb() == Color.White.ToArgb())
            {
                whitePixels = j + 1;
            }
        }
        if (whitePixels == image.Width)
            return i;
    }
    return -1;
}
IsImageBottomClipping is working fine, but the other method is not returning the correct white pixel row; it only gets to one row less than the image height. Example image:
In this case, a row around 180 should be the return value of GetLastWhiteLine, but it is returning 192.

All right, so... we've got two subjects to tackle here: first the optimising, then your bug. I'll start with the optimising.
The fastest way is to work on the memory directly, but, honestly, it's kind of unwieldy. The second-best choice, which is what I generally use, is to copy the raw image bytes out of the image object. That leaves you with four vital pieces of data:
The width, which you can just get from the image.
The height, which you can just get from the image.
The byte array, containing the image bytes.
The stride, which gives you the number of bytes used for each line of the image.
(Technically, there's a fifth one, namely the pixel format, but we'll just force things to 32bpp here so we don't have to take that into account along the way.)
Note that the stride, technically, is not just the number of bytes per pixel multiplied by the image width; it is rounded up to the next multiple of 4 bytes. When working with 32-bit ARGB content this isn't really an issue, since 32 bits is 4 bytes, but in general it's better to use the stride rather than the multiplied width, and to write all code assuming there could be padding bytes behind each line. You'll thank me if you're ever processing 24-bit RGB content with this kind of system.
However, when going over the image's content you obviously should only check the exact range that contains pixel data, and not the full stride.
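To make the padding rule concrete, here's a tiny illustrative snippet; the numbers are just an example and not tied to the question's image:
// Illustrative only: how a 24bpp line gets padded to a 4-byte boundary.
// A width of 101 pixels needs 303 data bytes, but the stride becomes 304.
Int32 exampleWidth = 101;
Int32 bytesPerPixel = 3;                           // 24bpp RGB
Int32 lineBytes = exampleWidth * bytesPerPixel;    // bytes that actually contain pixels
Int32 stride = (lineBytes + 3) / 4 * 4;            // rounded up to the next multiple of 4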
The way to get these things is quite simple: use LockBits on the image, tell it to expose the image as 32 bit per pixel ARGB data (it will actually convert it if needed), get the line stride, and use Marshal.Copy to copy the entire image contents into a byte array.
Int32 width = image.Width;
Int32 height = image.Height;
BitmapData sourceData = image.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
Int32 stride = sourceData.Stride;
Byte[] data = new Byte[stride * height];
Marshal.Copy(sourceData.Scan0, data, 0, data.Length);
image.UnlockBits(sourceData);
As mentioned, this is forced to 32-bit ARGB format. If you would want to use this system to get the data out in the original format it has inside the image, just change PixelFormat.Format32bppArgb to image.PixelFormat.
Now, you have to realise, LockBits is a rather heavy operation, which copies the data out, in the requested pixel format, to new memory, where it can be read or (if not specified as read-only as I did here) edited. What makes this more optimal than your method is, quite simply, that GetPixel performs a LockBits operation every time you request a single pixel value. So you're cutting down the amount of LockBits calls from several thousands to just one.
Anyway, now, as for your functions.
The first method is, in my opinion, completely unnecessary; you should just run the second one on any image you get. Its output gives you the last white line of the image, so if that value equals height-1 you're done, and if it doesn't, you immediately have the value needed for the further processing. The first function does exactly the same as the second, after all; it checks if all pixels on a line are white. The only difference is that it only processes the last line.
So, onto the second method. This is where things go wrong. You set the number of white pixels to the "current pixel index plus one", rather than incrementing it to check whether all pixels matched, meaning the method goes over all pixels but only really checks whether the last pixel of the row was white. Since your image indeed has a white pixel at the end of the last row, it aborts after one row.
Also, whenever you find a pixel that does not match, you should just abort the scan of that line immediately, like your first method does; there's no point in continuing on that line after that.
So, let's fix that second function, and rewrite it to work with that set of "byte array", "stride", "width" and "height" rather than an image. I added the "white" colour as a parameter too, to make it more reusable, so it's changed from GetLastWhiteLine to GetLastClearLine.
One general readability note: if you are iterating over the height and width, do actually call your loop variables y and x; it makes your code a lot clearer.
I explained the approach in the code comments.
private static Int32 GetLastClearLine(Byte[] sourceData, Int32 stride, Int32 width, Int32 height, Color checkColor)
{
    // Get the colour as UInt32 in advance.
    UInt32 checkColVal = (UInt32)checkColor.ToArgb();
    // Use a MemoryStream with a BinaryReader since it can read UInt32 values from a byte array directly.
    using (MemoryStream ms = new MemoryStream(sourceData))
    using (BinaryReader sr = new BinaryReader(ms))
    {
        for (Int32 y = height - 1; y >= 0; --y)
        {
            // Set the position in the memory stream to the start of the current row.
            ms.Position = stride * y;
            Int32 matchingPixels = 0;
            // Read UInt32 pixels for the whole row length.
            for (Int32 x = 0; x < width; ++x)
            {
                // Read a UInt32 for one whole 32bpp ARGB pixel.
                UInt32 colorVal = sr.ReadUInt32();
                // Compare it with the check value.
                if (colorVal == checkColVal)
                    matchingPixels++;
                else
                    break;
            }
            // Test if the full line matched the given colour.
            if (matchingPixels == width)
                return y;
        }
    }
    return -1;
}
This can be simplified, though; the loop variable x already contains the value you need, so if you simply declare it before the loop, you can check after the loop what value it had when the loop stopped, and there is no need to increment a second variable. And, honestly, the value read from the stream can be compared directly, without the colorVal variable. Making the contents of the y-loop:
{
    ms.Position = stride * y;
    Int32 x;
    for (x = 0; x < width; ++x)
        if (sr.ReadUInt32() != checkColVal)
            break;
    if (x == width)
        return y;
}
For your example image, this gets me value 178, which is correct when I check in Gimp.
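To tie it all together, here's a minimal usage sketch of how I'd wire this up, reusing the LockBits/Marshal.Copy snippet and GetLastClearLine from above; splitting via Bitmap.Clone is just one way to do it:
// Sketch: decide whether the image is clipped and, if so, split it at the last clear line.
// Assumes 'image', 'data', 'stride', 'width' and 'height' from the earlier snippet.
Int32 lastClear = GetLastClearLine(data, stride, width, height, Color.White);
if (lastClear == height - 1)
{
    // The bottom row is fully white: nothing is clipped off.
}
else if (lastClear >= 0)
{
    // Clipped: split the original image at the last fully-white row.
    Bitmap top = image.Clone(new Rectangle(0, 0, width, lastClear + 1), image.PixelFormat);
    Bitmap bottom = image.Clone(new Rectangle(0, lastClear + 1, width, height - lastClear - 1), image.PixelFormat);
}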

Related

Convert 12-bit Monochrome Image to 8-bit Grayscale

I have an image sensor board for embedded development for which I need to capture a stream of images and output them in 8-bit monochrome / grayscale format. The imager output is 12-bit monochrome (which takes 2 bytes per pixel).
In the code, I have an IntPtr to a memory buffer that has the 12-bit image data, from which I have to extract and convert that data down to an 8-bit image. This is represented in memory something like this (with a bright light activating the pixels):
As you can see, every second byte contains the LSB that I want to discard, thereby keeping only the odd-numbered bytes (to put it another way). The best solution I can conceptualize is to iterate through the memory, but that's the rub. I can't get that to work. What I need help with is an algorithm in C# to do this.
Here's a sample image that represents a direct creation of a Bitmap object from the IntPtr as follows:
bitmap = new Bitmap(imageWidth, imageHeight, imageWidth, PixelFormat.Format8bppIndexed, pImage);
// Failed Attempt #1
unsafe
{
    IntPtr pImage; // pointer to buffer containing 12-bit image data from imager
    int i = 0, imageSize = (imageWidth * imageHeight * 2); // two bytes per pixel
    byte[] imageData = new byte[imageSize];
    do
    {
        // Should I bitwise shift?
        imageData[i] = (byte)(pImage + i) << 8; // Doesn't compile, need help here!
    } while (i++ < imageSize);
}
// Failed Attempt #2
IntPtr pImage; // pointer to buffer containing 12-bit image data from imager
imageSize = imageWidth * imageHeight;
byte[] imageData = new byte[imageSize];
Marshal.Copy(pImage, imageData, 0, imageSize);
// I tried with and without this loop. Neither gives me images.
for (int i = 0; i < imageData.Length; i++)
{
    if (0 == i % 2) imageData[i / 2] = imageData[i];
}
Bitmap bitmap;
using (var ms = new MemoryStream(imageData))
{
    bitmap = new Bitmap(ms);
}
// This also introduced a memory leak somewhere.
// This also introduced a memory leak somewhere.
Alternatively, if there's a way to do this with a Bitmap, byte[], MemoryStream, etc. that works, I'm all ears, but everything I've tried has failed.
Here is the algorithm that my coworkers helped formulate. It creates two new (unmanaged) pointers; one 8-bits wide and the other 16-bits.
By stepping through one word at a time and shifting off the last 4 bits of the source, we get a new 8-bit image with only the MSBs. Each buffer has the same number of words, but since the words are different sizes, they progress at different rates as we iterate over them.
unsafe
{
    byte* p_bytebuffer = (byte*)pImage;
    short* p_shortbuffer = (short*)pImage;
    for (int i = 0; i < imageWidth * imageHeight; i++)
    {
        *p_bytebuffer++ = (byte)(*p_shortbuffer++ >> 4);
    }
}
In terms of performance, this appears to be very fast with no perceivable difference in framerate.
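For completeness, here's a managed sketch of the same idea without unsafe code. This is my own variant, assuming pImage points to imageWidth * imageHeight 16-bit words as described in the question:
// Managed sketch (no unsafe): copy the 16-bit words out with Marshal.Copy,
// then keep the top 8 of the 12 significant bits by shifting each word right by 4.
short[] rawWords = new short[imageWidth * imageHeight];
Marshal.Copy(pImage, rawWords, 0, rawWords.Length);
byte[] imageData = new byte[rawWords.Length];
for (int i = 0; i < rawWords.Length; i++)
{
    imageData[i] = (byte)((ushort)rawWords[i] >> 4);
}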
Special thanks to @Herohtar for spending a substantial amount of time in chat with me attempting to help me solve this.

Arrays juggling - data preparation

I could use some advice or help regarding a problem I'm facing; I can't figure out a decent algorithm or solution.
Here's the status:
I have a 2D array of shorts: short[][] signal. One dimension is the channels; depending on the problem I may have from 1 to 128 of them, and my test cases have 60 channels. The other dimension represents "time": 2048 samples per second, and the recordings are about 30 seconds long.
So an array of roughly short[60][61440] is my source.
My goal is to put this data into a byte array representing image data. I'm using 16bpp, so the conversion to bytes is direct. This is still not the problematic part.
The problem I want to solve is presenting the data like this: the 60 channels are arranged into a 12x5 matrix, and 256 samples (1/8 of a second) are displayed next to each other. Then the next "row" of those, and the next... Let me put up a sketch:
So if I have a recording of exactly 30 seconds, I get an image of 3072 x 1200 px (256 samples, each 12 px wide, across the width; and 240 (30x8) blocks, each 5 px tall, down the height).
Can someone give me a clue on how to approach this?
I've been thinking about a few versions, but none that would really work.
I hope my problem can be understood.
Ok so adding info:
bitmap = new Bitmap(60, length, PixelFormat.Format16bppRgb565);
BitmapData bmpdata = bitmap.LockBits(new Rectangle(0, 0, 60, length), ImageLockMode.ReadWrite, PixelFormat.Format16bppRgb565);
for (int i = 0; i < length; i++)
{
    for (int j = 0; j < channel_number; j++)
    {
        bufferList.AddRange(BitConverter.GetBytes(signalArray[j][i]));
    }
}
where the bufferList is defined as List<byte> bufferList = new List<byte>();
This is an example where I make an image of 60 x 61440 px (channels x time).
What I will do with this image is not the main point; the point is that I want to prepare the data to get the desired image as described...
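Below is a minimal sketch of one way to compute the destination pixel for each (channel, sample) pair, assuming my reading of the layout described above (12 channel columns and 5 channel rows per block, 256-sample blocks laid out left to right, one band of blocks per "row" down the image); the variable names are illustrative:
// Sketch: map (channel, sample) to an (x, y) pixel in the 3072 x 1200 layout.
// Assumes 60 channels arranged as a 12 x 5 matrix and 256 samples per block.
int channels = 60, samplesPerBlock = 256, matrixWidth = 12, matrixHeight = 5;
int totalSamples = signal[0].Length;                   // e.g. 61440
int blocks = totalSamples / samplesPerBlock;           // e.g. 240
int imageWidth = samplesPerBlock * matrixWidth;        // 3072
int imageHeight = blocks * matrixHeight;               // 1200
byte[] pixelData = new byte[imageWidth * imageHeight * 2]; // 16bpp = 2 bytes per pixel

for (int channel = 0; channel < channels; channel++)
{
    for (int sample = 0; sample < totalSamples; sample++)
    {
        int block = sample / samplesPerBlock;          // which 5-row band
        int inBlock = sample % samplesPerBlock;        // which 12-px-wide slot in that band
        int x = inBlock * matrixWidth + channel % matrixWidth;
        int y = block * matrixHeight + channel / matrixWidth;
        int offset = (y * imageWidth + x) * 2;
        byte[] bytes = BitConverter.GetBytes(signal[channel][sample]);
        pixelData[offset] = bytes[0];
        pixelData[offset + 1] = bytes[1];
    }
}
The resulting pixelData array could then be copied into a Format16bppRgb565 bitmap with LockBits and Marshal.Copy, much like the snippet above.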

Why does a bitmap compare not equal to itself?

This is a bit puzzling here. The following code is part of a little testing application to verify that code changes didn't introduce a regression. To make it fast we used memcmp which appears to be the fastest way of comparing two images of equal size (unsurprisingly).
However, we have a few test images that exhibit a rather surprising problem: memcmp on the bitmap data tells us that they are not equal, however, a pixel-by-pixel comparison doesn't find any difference at all. I was under the impression that when using LockBits on a Bitmap you get the actual raw bytes of the image. For a 24 bpp bitmap it's a bit hard to imagine a condition where the pixels are the same but the underlying pixel data isn't.
A few surprising things:
The differences are always single bytes that are 00 in one image and FF in the other.
If one changes the PixelFormat for LockBits to Format32bppRgb or Format32bppArgb, the comparison succeeds.
If one passes the BitmapData returned by the first LockBits call as 4th argument to the second one, the comparison succeeds.
As noted above, the pixel-by-pixel comparison succeeds as well.
I'm a bit stumped here because frankly I cannot imagine why this happens.
(Reduced) Code below. Just compile with csc /unsafe and pass a 24bpp PNG image as first argument.
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

class Program
{
    public static void Main(string[] args)
    {
        Bitmap title = new Bitmap(args[0]);
        Console.WriteLine(CompareImageResult(title, new Bitmap(title)));
    }

    private static string CompareImageResult(Bitmap bmp, Bitmap expected)
    {
        string retval = "";
        unsafe
        {
            var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
            var resultData = bmp.LockBits(rect, ImageLockMode.ReadOnly, bmp.PixelFormat);
            var expectedData = expected.LockBits(rect, ImageLockMode.ReadOnly, expected.PixelFormat);
            try
            {
                if (memcmp(resultData.Scan0, expectedData.Scan0, resultData.Stride * resultData.Height) != 0)
                    retval += "Bitmap data did not match\n";
            }
            finally
            {
                bmp.UnlockBits(resultData);
                expected.UnlockBits(expectedData);
            }
        }
        for (var x = 0; x < bmp.Width; x++)
            for (var y = 0; y < bmp.Height; y++)
                if (bmp.GetPixel(x, y) != expected.GetPixel(x, y))
                {
                    Console.WriteLine("Pixel diff at {0}, {1}: {2} - {3}", x, y, bmp.GetPixel(x, y), expected.GetPixel(x, y));
                    retval += "pixel fail";
                }
        return retval != "" ? retval : "success";
    }

    [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern int memcmp(IntPtr b1, IntPtr b2, long count);
}
Take a look at this, which pictorially illustrates a LockBits buffer - it shows the Rows of Strides and where Padding can appear at the end of the Stride (if it's needed).
https://web.archive.org/web/20141229164101/http://bobpowell.net/lockingbits.aspx
http://supercomputingblog.com/graphics/using-lockbits-in-gdi/
A stride is probably aligned to the 32-bit (i.e. word) boundary for efficiency purposes, and the extra unused space at the end of the stride is there to make the next stride start aligned.
So that's what's giving you the random behaviour during the comparison: spurious data in the padding region.
When you are using Format32bppRgb and Format32bppArgb, that's naturally word aligned, so I guess you don't have any extra unused bytes at the end, which is why it works.
Just an educated guess:
24 bits (3 bytes) is a little bit awkward on 32/64 bit hardware.
With this format there are bound to be buffers that are padded out to a multiple of 4 bytes, leaving 1 or more bytes as 'don't care'. They can contain random data and the software doesn't feel obliged to zero them out. This will make memcmp fail.
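Building on both answers: if you want to keep memcmp for speed but avoid the padding bytes, you can compare row by row over just the pixel bytes. This is a sketch of my own, reusing the memcmp declaration from the question:
// Sketch: compare only the pixel bytes of each row, skipping the stride padding.
// Assumes the memcmp P/Invoke declared in the question and two same-sized bitmaps
// that have already been locked with LockBits.
private static bool BitmapDataEqual(BitmapData a, BitmapData b)
{
    int bytesPerPixel = Image.GetPixelFormatSize(a.PixelFormat) / 8;
    int rowLength = a.Width * bytesPerPixel;        // pixel bytes only, no padding
    for (int y = 0; y < a.Height; y++)
    {
        IntPtr rowA = IntPtr.Add(a.Scan0, y * a.Stride);
        IntPtr rowB = IntPtr.Add(b.Scan0, y * b.Stride);
        if (memcmp(rowA, rowB, rowLength) != 0)
            return false;
    }
    return true;
}
You could call this between the two LockBits calls in CompareImageResult, in place of the single whole-buffer memcmp.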

I have a PGM format image with 8bpp 1024 X 1024 size and need to calculate GLCM (Grey Level Co-occurrence Matrix) from it

The image is big, and I used the GetPixel and SetPixel methods to access pixels but found that it was way too slow, so I went to implement LockBits and UnlockBits but could not get my head around them. I also went through Bob Powell's tutorials but could not understand them. So I am asking for some help here to get the GLCM from the image.
GLCM is generally a very computationally intensive algorithm. It iterates through each pixel, for each neighbor. Even C++ image processing libraries have this issue.
GLCM does however lend itself quite nicely to parallel (multi-threaded) implementations as the calculations for each reference pixel are independent.
With regards to using lock and unlock bits see the example code below. One thing to keep in mind is that the image can be padded for optimization reasons. Also, if your image has a different bit depth or multiple channels you will need to adjust the code accordingly.
// Note: this needs an unsafe context for the byte pointer.
BitmapData data = image.LockBits(new Rectangle(0, 0, width, height),
    ImageLockMode.ReadOnly, PixelFormat.Format8bppIndexed); // 8bpp grayscale
byte* dataPtr = (byte*)data.Scan0;
int rowPadding = data.Stride - image.Width;
// iterate over height (rows)
for (int i = 0; i < height; i++)
{
    // iterate over width (columns)
    for (int j = 0; j < width; j++)
    {
        // pixel value
        int value = dataPtr[0];
        // advance to next pixel
        dataPtr++;
    }
    // at the end of each row, skip the extra padding
    if (rowPadding > 0)
    {
        dataPtr += rowPadding;
    }
}
image.UnlockBits(data);
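From there, building the GLCM itself is mostly bookkeeping: for a chosen offset you count how often each pair of grey levels co-occurs. Here's a minimal sketch of my own (not from the answer above) for a horizontal offset of one pixel, using a managed copy of the locked data so no unsafe code is needed; it assumes it runs before the UnlockBits call in the snippet above:
// Sketch: 256 x 256 GLCM for the "one pixel to the right" relation.
// Assumes 'data', 'width' and 'height' from the snippet above, and
// System.Runtime.InteropServices for Marshal.Copy.
int[,] glcm = new int[256, 256];
byte[] pixels = new byte[data.Stride * height];
Marshal.Copy(data.Scan0, pixels, 0, pixels.Length);
for (int y = 0; y < height; y++)
{
    int rowStart = y * data.Stride;
    for (int x = 0; x < width - 1; x++)
    {
        int reference = pixels[rowStart + x];       // grey level of the reference pixel
        int neighbour = pixels[rowStart + x + 1];   // grey level of its right-hand neighbour
        glcm[reference, neighbour]++;
    }
}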

How can I do a color or numeric replacement with bitwise/boolean logic

How can I do a color replace like the code below without using the if statement and instead use boolean algebra (or some other magic that will not introduce conditional logic)
The Problem (excuse the code):
private Image ReplaceRectangleColors(Bitmap b,
                                     Rectangle rect,
                                     Color oldColor,
                                     Color newColor)
{
    BitmapData bmData = b.LockBits(rect,
                                   ImageLockMode.ReadWrite,
                                   PixelFormat.Format24bppRgb);
    int stride = bmData.Stride;
    IntPtr Scan0 = bmData.Scan0;
    byte red = 0;
    byte blue = 0;
    byte green = 0;
    unsafe
    {
        byte* p = (byte*)(void*)Scan0;
        int nOffset = stride - rect.Width * 3;
        for (int y = 0; y < rect.Height; ++y)
        {
            for (int x = 0; x < rect.Width; ++x)
            {
                red = p[0];
                blue = p[1];
                green = p[2];
                if (red == oldColor.R
                    && blue == oldColor.B
                    && green == oldColor.G)
                {
                    p[0] = newColor.R;
                    p[1] = newColor.B;
                    p[2] = newColor.G;
                }
                p += 3;
            }
            p += nOffset;
        }
    }
    b.UnlockBits(bmData);
    return (Image)b;
}
The problem I have is that if the image is huge, this code gets executed many times and has poor performance. I know there has to be a way to substitute the color replacement with something much cleaner/faster. Any ideas?
Just to summarize and simplify, I want to turn
if (red == oldColor.R
    && blue == oldColor.B
    && green == oldColor.G)
{
    red = newColor.R;
    blue = newColor.B;
    green = newColor.G;
}
into a bit operation that doesn't include an if statement.
There aren't any bitwise operations that will replace pixels of one colour with another for you. In fact, reading a pixel, applying a bitwise operation and writing back the results for every pixel will probably work out slower than reading a pixel and only doing any work on it and writing it back if it matches your target colour.
However, there are some things that can be done to speed up the code, with increasing levels of complexity:
1) The first thing you could do is not to read the 3 bytes before you do the compare. If you read each byte only as it is needed for the comparison, then in the case that the red byte doesn't match, there isn't any need to read or compare the Green/Blue bytes. (The optimiser may well work this out on your behalf though)
2) Use cache coherence by accessing the data in the address-order that it is stored in. (You're doing this by working on the scanlines by putting x in your inner loop).
3) Use multithreading. Break the image into (e.g.) 4 strips, and process them in parallel, and you should be able to get a "several times" speedup if you have a 4+ core processor.
4) You may be able to work several times faster by using a 32-bit or 64-bit value instead of four or eight 8-bit values. This is because fetching one byte from memory might take a similar time (give or take some cache coherence etc) to fetching an entire CPU register (4 or 8 bytes). Once you have the value in a register, you can do a single comparison (RGBA) rather than four (R, G, B, A bytes separately), and then a single write back - potentially as much as 4x faster. This is the easy case (for 32-bpp images), as they conveniently fit one-pixel-per-int, so you can use a 32-bit integer to read/compare/write an entire RGBA pixel in a single operation.
But for other image depths you will have a much harder case, as the number of bytes in each pixel will not exactly match the size of your 32-bit int. For example, for 24bpp images, you will need to read three 32-bit dwords (12 bytes) so that you can then process four pixels (3 bytes x 4 = 12) on each iteration of your loop. You will need to use bitwise operations to peel apart these 3 ints and compare them to your 'oldcolour' (see below). An added complication is that you must be careful not to run off the end of each scanline if you are processing it in 4-pixel jumps. A similar process applies to using 64-bit longs, or processing lower bpp images - but you will have to start doing more intricate bit-wise operations to pull the data out cleanly, and it can get pretty complicated.
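To make the easy 32 bpp case from point 4 concrete, here's a minimal sketch of my own (not from the question; it assumes a Format32bppArgb bitmap and processes the whole image rather than a rectangle), comparing one int per pixel via a managed copy of the data:
// Sketch: replace a colour in a 32bpp image with one int compare per pixel.
// Requires System.Drawing, System.Drawing.Imaging and System.Runtime.InteropServices.
private static void ReplaceColor32(Bitmap b, Color oldColor, Color newColor)
{
    BitmapData data = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
    // One int per pixel; the 32bpp stride is a multiple of 4, so this also covers any padding.
    int[] pixels = new int[data.Stride / 4 * data.Height];
    Marshal.Copy(data.Scan0, pixels, 0, pixels.Length);
    int oldVal = oldColor.ToArgb();
    int newVal = newColor.ToArgb();
    for (int i = 0; i < pixels.Length; i++)
        if (pixels[i] == oldVal)      // one 32-bit compare instead of four byte compares
            pixels[i] = newVal;
    Marshal.Copy(pixels, 0, data.Scan0, pixels.Length);
    b.UnlockBits(data);
}
For 24 bpp the comparison is messier because each pixel straddles int boundaries, which is what the rest of this answer walks through.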
So how do you compare the pixels?
The first pixel is easy.
int oldColour = 0x00112233; // e.g. R=33, G=22, B=11
int newColour = 0x00445566;
int chunk1 = scanline[i]; // Treating scanline as an array of int, read 3 ints (12 bytes)
int chunk2 = scanline[i+1]; // We cache them in ints as we will read/write several times
int chunk3 = scanline[i+2];
if ((chunk1 & 0x00ffffff) == oldColour)          // read and check 3 bytes of pixel
    chunk1 = (chunk1 & ~0x00ffffff) | newColour; // write back 3 bytes of pixel
The next pixel has one byte in the first int, and 2 bytes in the next int:
if (((chunk1 >> 24) & 0xff) == (oldColour & 0xff)) // Does the B byte match?
{
    if ((chunk2 & 0x0000ffff) == (oldColour >> 8))
    {
        chunk1 = (chunk1 & 0x00ffffff) | ((newColour & 0xff) << 24); // Replace the B byte in chunk1
        chunk2 = (chunk2 & ~0x0000ffff) | (newColour >> 8);          // Replace the other 2 bytes of the pixel in chunk2
    }
}
Then the third pixel has 2 bytes (RG) in chunk2 and 1 byte (B) in chunk3:
if (((chunk2 >> 16) & 0xffff) == (oldColour & 0xffff))
{
    if ((chunk3 & 0xff) == (oldColour >> 16))
    {
        chunk2 = (chunk2 & 0x0000ffff) | (newColour << 16);  // Replace the 2 bytes of the pixel in chunk2
        chunk3 = (chunk3 & ~0x000000ff) | (newColour >> 16); // Replace the last byte of the pixel in chunk3
    }
}
And finally, the last 3 bytes in chunk3 are the last pixel
if (((chunk3 >> 8) & 0x00ffffff) == oldColour)
    chunk3 = (chunk3 & 0x000000ff) | (newColour << 8);
... and then write back the chunks to the scanline buffer.
That's the gist of it (and my masking/combining above may have some bugs, as I wrote the example code quickly and may have mixed up some of the pixels!).
Of course, once it works, you can then optimise it a load more - for example, whenever I compare stuff to parts of the oldColour (e.g. oldColour >> 16), I can precalculate that constant outside the entire processing loop, and just use an "oldColourShiftedRight16" variable to avoid recalculating it on every pass through the loop. The same goes for all the bits of newColour that are used. Potentially you may be able to make some gains by avoiding writing back the values that haven't been touched, too, as many of your pixels probably won't match the one you want to change.
So that should give you some idea of what you were asking for. It's not particularly simple, but it's a great deal of fun :-)
When you've got it all written and super-optimised, then the final step is to throw it away and just use your graphics card to do the whole thing a bazillion times faster in hardware - but let's face it, where's the fun in that? :-)
I wrote a project recently where I did color manipulation on a pixel per pixel basis. It had to run fast as it would update while you moved a mouse cursor around.
I started with unsafe code, but I don't like unsafe code and so moved to safe territory. When I did, I had the speed issues you have, but the resolution wasn't removing conditional logic; it was designing better algorithms for the pixel manipulation.
I'll give you an overview of what I did and I'm hoping it can get you where you want to be because it's really close.
First: I had multiple possible input pixel formats. Due to that, I couldn't assume the RGB bytes were at specific offsets or even a fixed width. As such, I read the info from the passed-in image and return a "color" that encodes the byte width of a pixel and the offset of each channel:
private System.Drawing.Color GetOffsets(System.Drawing.Imaging.PixelFormat PixelFormat)
{
    //Alpha contains bytes per color,
    // R contains R offset in bytes
    // G contains G offset in bytes
    // B contains B offset in bytes
    switch (PixelFormat)
    {
        case System.Drawing.Imaging.PixelFormat.Format24bppRgb:
            return System.Drawing.Color.FromArgb(3, 0, 1, 2);
        case System.Drawing.Imaging.PixelFormat.Format32bppArgb:
        case System.Drawing.Imaging.PixelFormat.Format32bppPArgb:
            return System.Drawing.Color.FromArgb(4, 1, 2, 3);
        case System.Drawing.Imaging.PixelFormat.Format32bppRgb:
            return System.Drawing.Color.FromArgb(4, 0, 1, 2);
        case System.Drawing.Imaging.PixelFormat.Format8bppIndexed:
            return System.Drawing.Color.White;
        default:
            return System.Drawing.Color.White;
    }
}
For example purposes, let's say that a 24-bit RGB image is the source. I didn't want to change alpha values, as I'm going to blend a color into it.
Thus, R is at offset 0, G is at offset 1 and B is at offset 2, and each pixel is three bytes wide. So I create a temporary Color with this data.
Next, since this is in a custom control, I didn't want flickering so I overrode the OnPaintBackground and turned it off:
protected override void OnPaintBackground(System.Windows.Forms.PaintEventArgs pevent)
{
    //base.OnPaintBackground(pevent);
}
Finally, and here's the part that gets to the crux of what you're doing, I draw a new image on each OnPaint (which is triggered as a mouse moves because I "Invalidate" it in the mouse move event handler)
Full code - before I call certain sections out ...
protected override void OnPaint(System.Windows.Forms.PaintEventArgs pe)
{
    base.OnPaint(pe);
    pe.Graphics.FillRectangle(new System.Drawing.SolidBrush(this.BackColor), pe.ClipRectangle);
    System.Drawing.Rectangle DestinationRect = GetDestinationRectangle(pe.ClipRectangle);
    if (DestinationRect != System.Drawing.Rectangle.Empty)
    {
        System.Drawing.Image BlendedImage = (System.Drawing.Image) this.Image.Clone();
        if (HighlightRegion != System.Drawing.Rectangle.Empty && this.Image != null)
        {
            System.Drawing.Rectangle OffsetHighlightRegion =
                new System.Drawing.Rectangle(
                    new System.Drawing.Point(
                        Math.Min(Math.Max(HighlightRegion.X + OffsetX, 0), BlendedImage.Width - HighlightRegion.Width - 1),
                        Math.Min(Math.Max(HighlightRegion.Y + OffsetY, 0), BlendedImage.Height - HighlightRegion.Height - 1)
                    )
                    , HighlightRegion.Size
                );
            System.Drawing.Bitmap BlendedBitmap = (System.Drawing.Bitmap) BlendedImage;
            System.Drawing.Color OffsetRGB = GetOffsets(BlendedImage.PixelFormat);
            byte BlendR = SelectionColor.R;
            byte BlendG = SelectionColor.G;
            byte BlendB = SelectionColor.B;
            byte BlendBorderR = SelectionBorderColor.R;
            byte BlendBorderG = SelectionBorderColor.G;
            byte BlendBorderB = SelectionBorderColor.B;
            if (OffsetRGB != System.Drawing.Color.White) //White means not supported
            {
                int BitWidth = OffsetRGB.G - OffsetRGB.R;
                System.Drawing.Imaging.BitmapData BlendedData = BlendedBitmap.LockBits(new System.Drawing.Rectangle(0, 0, BlendedBitmap.Width, BlendedBitmap.Height), System.Drawing.Imaging.ImageLockMode.ReadWrite, BlendedBitmap.PixelFormat);
                int StrideWidth = BlendedData.Stride;
                int BytesPerColor = OffsetRGB.A;
                int ROffset = BytesPerColor - (OffsetRGB.R + 1);
                int GOffset = BytesPerColor - (OffsetRGB.G + 1);
                int BOffset = BytesPerColor - (OffsetRGB.B + 1);
                byte[] BlendedBytes = new byte[Math.Abs(StrideWidth) * BlendedData.Height];
                System.Runtime.InteropServices.Marshal.Copy(BlendedData.Scan0, BlendedBytes, 0, BlendedBytes.Length);
                //Create Highlighted Region
                for (int Row = OffsetHighlightRegion.Top; Row <= OffsetHighlightRegion.Bottom; Row++)
                {
                    for (int Column = OffsetHighlightRegion.Left; Column <= OffsetHighlightRegion.Right; Column++)
                    {
                        int Offset = Row * StrideWidth + Column * BytesPerColor;
                        if (Row == OffsetHighlightRegion.Top || Row == OffsetHighlightRegion.Bottom || Column == OffsetHighlightRegion.Left || Column == OffsetHighlightRegion.Right)
                        {
                            BlendedBytes[Offset + ROffset] = BlendBorderR;
                            BlendedBytes[Offset + GOffset] = BlendBorderG;
                            BlendedBytes[Offset + BOffset] = BlendBorderB;
                        }
                        else
                        {
                            BlendedBytes[Offset + ROffset] = (byte) ((BlendedBytes[Offset + ROffset] + BlendR) >> 1);
                            BlendedBytes[Offset + GOffset] = (byte) ((BlendedBytes[Offset + GOffset] + BlendG) >> 1);
                            BlendedBytes[Offset + BOffset] = (byte) ((BlendedBytes[Offset + BOffset] + BlendB) >> 1);
                        }
                    }
                }
                System.Runtime.InteropServices.Marshal.Copy(BlendedBytes, 0, BlendedData.Scan0, BlendedBytes.Length);
                BlendedBitmap.UnlockBits(BlendedData);
                //base.Image = (System.Drawing.Image) BlendedBitmap;
            }
        }
        pe.Graphics.DrawImage(BlendedImage, 0, 0, DestinationRect, System.Drawing.GraphicsUnit.Pixel);
    }
}
Going through the code here are some explanations...
System.Drawing.Image BlendedImage = (System.Drawing.Image) this.Image.Clone();
It is important to draw to an offscreen image - this creates one such image. Otherwise, the drawing will be much slower.
if(HighlightRegion != System.Drawing.Rectangle.Empty && this.Image != null)
HighlightRegion is a RECT that holds the area to "mark off" on the source image. I have used this to mark off image regions of 4 Million pixels and it still runs fast enough to be "real time"
Some code below is used because a user might be scrolled over or down on the image so I modify my destination by their scrolling amount.
Below that, I cast the IMAGE to a BITMAP and get the before-mentioned Color info which I'll need to start using now. Depending on what you're doing you might want to cache that instead of getting it each time.
System.Drawing.Bitmap BlendedBitmap = (System.Drawing.Bitmap) BlendedImage;
On my control, I exposed two Color properties - SelectionColor and SelectionBorderColor - so that my regions still have a nice border with them. Part of my speed optimization was to pre-cast these to bytes as I'll be doing bitwise operations in a moment.
You'll see a comment in the code "White not supported" - in this case, the "White" is the "Fake Color" we use to store our bit widths. I used "White" to mean "I can't operate on this data"
The next line establishes that each colour channel really is one byte wide (it might not be, depending on the target colour format) by subtracting the R offset from the G offset. Note that if you cannot guarantee that your G follows your R, then you'll need to use something else. In my case, it was guaranteed.
Now is where the part you're really looking for starts. I use LockBits to get the bit data. After that, I use the data to finish setting up some pre-loop variables.
And then I copy the data to a byte array. I'm going to loop through this byte array, change the values, and then copy its data back to the BITMAP. I was working on the BITMAP directly before, thinking that since it's offscreen it would be just as fast as working with a native array.
I was wrong. Performance profiling proved it to me. It's faster to copy everything to a byte array and work within that.
Now the loop starts. It goes row by row, column by column. Offset is a number telling us where in the byte array we are in terms of "current pixel".
Then, I blend 50% or I draw a border. Note that for each pixel I have not only an IF statement, but also OR checks.
And it's still fast as blazes.
Finally, I copy back and unlock the bits. And then copy the image to the onscreen surface.
