I've been working on a bitmap decoder, but my algorithm for processing the pixel data doesn't seem to be quite right:
public IntPtr ReadPixels(Stream fs, int offset, int width, int height, int bpp)
{
    IntPtr bBits;
    int pixelCount = bpp * width * height;
    int Row = 0;
    decimal value = ((bpp * width) / 32) / 4;
    int RowSize = (int)Math.Ceiling(value);
    int ArraySize = RowSize * Math.Abs(height);
    int Col = 0;
    Byte[] BMPData = new Byte[ArraySize];
    BinaryReader r = new BinaryReader(fs);
    r.BaseStream.Seek(offset, SeekOrigin.Begin);
    while (Row < height)
    {
        Byte ReadByte;
        if (!(Col >= RowSize))
        {
            ReadByte = r.ReadByte();
            BMPData[(Row * RowSize) + Col] = ReadByte;
            Col += 1;
        }
        if (Col >= RowSize)
        {
            Col = 0;
            Row += 1;
        }
    }
    bBits = System.Runtime.InteropServices.Marshal.AllocHGlobal(BMPData.Length);
    System.Runtime.InteropServices.Marshal.Copy(BMPData, 0, bBits, BMPData.Length);
    return bBits;
}
I can only process monochrome bitmaps, and on some of them only parts of the bitmap are processed correctly. None are compressed, yet they come out upside down and mirrored. I could really do with some help on this one.
decimal value = ((bpp*width)/32)/4;
int RowSize = (int)Math.Ceiling(value);
That isn't correct. Your RowSize variable is actually called "stride". You compute it like this:
int bytes = (width * bitsPerPixel + 7) / 8;
int stride = 4 * ((bytes + 3) / 4);
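For illustration, here is a minimal sketch (using the OP's parameter names, but not his exact method) of reading uncompressed pixel data with that stride, one padded row at a time. For a 1 bpp image 100 px wide this gives bytes = 13 and stride = 16.
int bytes = (width * bpp + 7) / 8;        // bytes of real pixel data per row
int stride = 4 * ((bytes + 3) / 4);       // row length in the file, padded to 4 bytes
byte[] data = new byte[stride * Math.Abs(height)];
fs.Seek(offset, SeekOrigin.Begin);
for (int row = 0; row < Math.Abs(height); row++)
{
    if (fs.Read(data, row * stride, stride) != stride)
        throw new EndOfStreamException();
}
// Note: a BMP with a positive height stores its rows bottom-up, which is why the
// image comes out upside down if the rows are drawn in file order.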
You are ignoring the stride.
Image rows can be padded with additional bytes so that each row's length is divisible by some alignment: 1 (no padding), 2, 4 (the default for BMP files), 8, 16, ...
Also, an image can be a rectangular region within a larger image, which makes the "padding" between rows of the smaller image even larger (since the stride is the larger image's stride). In that case the image can also have an offset for its start point within the buffer.
Better practice is:
// Overload this method 3 times for different bits-per-SUB-pixel values (8, 16 or 32)
// = (byte, int, float).
// SUB-pixel != pixel: a pixel has 1, 3 or 4 sub-pixels (grey, RGB/BGR, or BGRA/RGBA/ARGB/ABGR).
unsafe
{
    byte[] buffer = image.Buffer;
    int stride = buffer.Length / image.PixelHeight;
    // or: int stride = image.LineSize; (or whatever your image type exposes)

    fixed (byte* bufferStart = buffer)
    {
        // Reinterpret as float*; use byte* or int* depending on the sub-pixel type.
        float* regionStart = (float*)bufferStart;

        for (int y = 0; y < height; y++)                    // height in pixels
        {
            // float* and float, or byte* and byte, or int* and int
            float* currentPos
                = regionStart + offset / sizeof(float) + (stride / sizeof(float)) * y;

            for (int x = 0; x < width; x++)                 // width in pixels
            {
                for (int chan = 0; chan < channel; chan++)  // 1, 3 or 4 channels
                {
                    // DO NOT USE decimal - you want accurate image values with the best
                    // performance, i.e. primitive types, not a .NET type meant for
                    // nice-looking user-facing values such as 12.34.
                    // Use the actual sub-pixel type (byte/int/float), or double, instead.
                    float currentValue = *currentPos;
                    currentPos++;
                }
            }
        }
    }
}
I found something I don't understand:
decimal value = ((bpp*width)/32)/4;
int RowSize = (int)Math.Ceiling(value);
RowSize, in my opinion, should be (bpp * width) / 8 + ((bpp * width) % 8 == 0 ? 0 : 1), i.e. the row's bit count rounded up to whole bytes (before any padding).
Modifying the code provided in this link:
Original code
I wrote this:
private void btnLoad_Click(object sender, EventArgs e)
{
    if (System.IO.File.Exists(txtPicture.Text))
    {
        byte[] _data = System.IO.File.ReadAllBytes(txtPicture.Text);
        var _rgbData = Convert16BitGrayScaleToRgb16(_data, 160, 120);
        var _bmp = CreateBitmapFromBytes(_rgbData, 160, 120);
        pbFrame.Image = _bmp;
    }
}

private static void Convert16bitGSToRGB(UInt16 color, out byte red, out byte green, out byte blue)
{
    red = (byte)(color & 0x31);
    green = (byte)((color & 0x7E0) >> 5);
    blue = (byte)((color & 0xF800) >> 11);
}

private static byte[] Convert16BitGrayScaleToRgb48(byte[] inBuffer, int width, int height)
{
    int inBytesPerPixel = 2;
    int outBytesPerPixel = 6;
    byte[] outBuffer = new byte[width * height * outBytesPerPixel];
    int inStride = width * inBytesPerPixel;
    int outStride = width * outBytesPerPixel;
    // Step through the image by row
    for (int y = 0; y < height; y++)
    {
        // Step through the image by column
        for (int x = 0; x < width; x++)
        {
            // Get inbuffer index and outbuffer index
            int inIndex = (y * inStride) + (x * inBytesPerPixel);
            int outIndex = (y * outStride) + (x * outBytesPerPixel);
            byte hibyte = inBuffer[inIndex + 1];
            byte lobyte = inBuffer[inIndex];
            //R
            outBuffer[outIndex] = lobyte;
            outBuffer[outIndex + 1] = hibyte;
            //G
            outBuffer[outIndex + 2] = lobyte;
            outBuffer[outIndex + 3] = hibyte;
            //B
            outBuffer[outIndex + 4] = lobyte;
            outBuffer[outIndex + 5] = hibyte;
        }
    }
    return outBuffer;
}

private static byte[] Convert16BitGrayScaleToRgb16(byte[] inBuffer, int width, int height)
{
    int inBytesPerPixel = 2;
    int outBytesPerPixel = 2;
    byte[] outBuffer = new byte[width * height * outBytesPerPixel];
    int inStride = width * inBytesPerPixel;
    int outStride = width * outBytesPerPixel;
    // Step through the image by row
    for (int y = 0; y < height; y++)
    {
        // Step through the image by column
        for (int x = 0; x < width; x++)
        {
            // Get inbuffer index and outbuffer index
            int inIndex = (y * inStride) + (x * inBytesPerPixel);
            int outIndex = (y * outStride) + (x * outBytesPerPixel);
            byte hibyte = inBuffer[inIndex];
            byte lobyte = inBuffer[inIndex + 1];
            outBuffer[outIndex] = lobyte;
            outBuffer[outIndex + 1] = hibyte;
        }
    }
    return outBuffer;
}

private static byte[] Convert16BitGrayScaleToRgb24(byte[] inBuffer, int width, int height)
{
    int inBytesPerPixel = 2;
    int outBytesPerPixel = 3;
    byte[] outBuffer = new byte[width * height * outBytesPerPixel];
    int inStride = width * inBytesPerPixel;
    int outStride = width * outBytesPerPixel;
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            int inIndex = (y * inStride) + (x * inBytesPerPixel);
            int outIndex = (y * outStride) + (x * outBytesPerPixel);
            byte hibyte = inBuffer[inIndex];
            byte lobyte = inBuffer[inIndex + 1];
            byte r, g, b;
            UInt16 color = (UInt16)(hibyte << 8 | lobyte);
            Convert16bitGSToRGB(color, out r, out g, out b);
            outBuffer[outIndex] = r;
            outBuffer[outIndex + 1] = g;
            outBuffer[outIndex + 2] = b;
        }
    }
    return outBuffer;
}

private static Bitmap CreateBitmapFromBytes(byte[] pixelValues, int width, int height)
{
    //Create an image that will hold the image data
    Bitmap bmp = new Bitmap(width, height, PixelFormat.Format16bppRgb565);
    //Get a reference to the images pixel data
    Rectangle dimension = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData picData = bmp.LockBits(dimension, ImageLockMode.ReadWrite, bmp.PixelFormat);
    IntPtr pixelStartAddress = picData.Scan0;
    //Copy the pixel data into the bitmap structure
    Marshal.Copy(pixelValues, 0, pixelStartAddress, pixelValues.Length);
    bmp.UnlockBits(picData);
    return bmp;
}
But still the result of the conversion is not satisfying/correct. This is the picture I should get:
Converting the file linked here:
Example RAW16 picture file
This is the result using Convert16BitGrayScaleToRgb48:
This is the result using Convert16BitGrayScaleToRgb16:
This is the result using Convert16BitGrayScaleToRgb24:
It's quite clear that the color remapping is wrong, but I can't understand where the problem is.
Additionally, I found that the PictureBox doesn't show exactly what it stores. The second image from the top (the Convert16BitGrayScaleToRgb48 result) is what the PictureBox shows, while the following picture is what I obtain if I save the shown image in PNG format:
I thought RAW16 grayscale should mean 2 bytes containing either a 16-bit gray value or an RGB gray value encoded in a 565 or 555 map, but neither of those hypotheses seems to match the real thing.
Does someone have a hint on how to convert the values in the source file to obtain a picture like the first one (obtained from the same source using ImageJ)?
I found a possible hint using GIMP. If I load the original file through this app (changing the extension to .data and/or forcing it to load as RAW) and set it to 160x120, 16 bpp, big endian, I get a nearly black frame (!), but if I adjust the levels, compressing the range around the only small peak present (around 12.0 black - 13.0 white), the image comes out correct. Changing the endianness is pretty straightforward; compressing the dynamic range a little less so, but I'm working on it.
The first lesson learned in this experience is "Don't trust your eyes" :-).
The final result of my efforts are these three methods:
public static void GetMinMax(byte[] data, out UInt16 min, out UInt16 max, bool big_endian = true)
{
    if (big_endian)
        min = max = (UInt16)((data[0] << 8) | data[1]);
    else
        min = max = (UInt16)((data[1] << 8) | data[0]);
    for (int i = 0; i < (data.Length - 1); i += 2)
    {
        UInt16 _value;
        if (big_endian)
            _value = (UInt16)((data[i] << 8) | data[i + 1]);
        else
            _value = (UInt16)((data[i + 1] << 8) | data[i]);
        if (_value < min)
            min = _value;
        if (_value > max)
            max = _value;
    }
}

public static void CompressRange(byte MSB, byte LSB, UInt16 min, UInt16 max, out byte color, Polarity polarity)
{
    UInt16 _value = (UInt16)((MSB << 8) | LSB);
    _value -= min;
    switch (polarity)
    {
        case Polarity.BlackHot:
            _value = (UInt16)((_value * 255) / (max - min));
            _value = (UInt16)(255 - _value);
            break;
        default:
        case Polarity.WhiteHot:
            _value = (UInt16)((_value * 255) / (max - min));
            break;
    }
    color = (byte)(_value & 0xff);
}

public static byte[] Convert16BitGrayScaleToRgb24(byte[] inBuffer, int width, int height, UInt16 min, UInt16 max, bool big_endian = true, Polarity polarity = Polarity.WhiteHot)
{
    int inBytesPerPixel = 2;
    int outBytesPerPixel = 3;
    byte[] outBuffer = new byte[width * height * outBytesPerPixel];
    int inStride = width * inBytesPerPixel;
    int outStride = width * outBytesPerPixel;
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            int inIndex = (y * inStride) + (x * inBytesPerPixel);
            int outIndex = (y * outStride) + (x * outBytesPerPixel);
            byte hibyte;
            byte lobyte;
            if (big_endian)
            {
                hibyte = inBuffer[inIndex];
                lobyte = inBuffer[inIndex + 1];
            }
            else
            {
                hibyte = inBuffer[inIndex + 1];
                lobyte = inBuffer[inIndex];
            }
            byte gray;
            CompressRange(hibyte, lobyte, min, max, out gray, polarity);
            outBuffer[outIndex] = gray;
            outBuffer[outIndex + 1] = gray;
            outBuffer[outIndex + 2] = gray;
        }
    }
    return outBuffer;
}
These allow loading the file attached to the original question and displaying it on a standard Windows Forms PictureBox. Using the 48 bpp format results in a degraded image on some graphics cards (at least on mine).
BTW, GetMinMax calculates the min/max values on the current frame only, regardless of the history of the environment. This means that if you use these functions to display a picture sequence (as I do), a strong variation of the average temperature in the FOV will drive the overall image to a different exposure, losing some details of the picture. In such cases I suggest calculating min/max over the current frame but NOT passing them directly to Convert16BitGrayScaleToRgb24, using instead a moving average of both values.
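A minimal sketch of that idea (the field names are my own; it assumes the three methods above): smooth the per-frame min/max with an exponential moving average so a sudden change of average temperature in the FOV does not make the exposure of the whole sequence jump.
// Sketch only: exponentially smoothed min/max across frames.
double smoothedMin = double.NaN;
double smoothedMax = double.NaN;
const double Alpha = 0.1; // smoothing factor, tune for your frame rate

byte[] ConvertFrame(byte[] raw16, int width, int height)
{
    GetMinMax(raw16, out UInt16 frameMin, out UInt16 frameMax);
    if (double.IsNaN(smoothedMin)) { smoothedMin = frameMin; smoothedMax = frameMax; }
    smoothedMin = Alpha * frameMin + (1 - Alpha) * smoothedMin;
    smoothedMax = Alpha * frameMax + (1 - Alpha) * smoothedMax;
    return Convert16BitGrayScaleToRgb24(raw16, width, height,
                                        (UInt16)smoothedMin, (UInt16)smoothedMax);
}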
I have a problem: I need to perform this function with LockBits. Please, I need help.
public void xPix(Bitmap bmp, int n, Color cx, Color nx)
{
    try
    {
        for (int y = 0; y < bmp.Height; y++)
        {
            for (int x = 0; x < bmp.Width; x += (n * 2))
            {
                cx = bmp.GetPixel(x, y);
                if (x + n <= bmp.Width - 1) nx = bmp.GetPixel(x + n, y);
                bmp.SetPixel(x, y, nx);
                if (x + n <= bmp.Width - 1) bmp.SetPixel(x + n, y, cx);
            }
        }
    }
    catch { }
}
There were lots of things that didn't make sense to me about your code. I fixed the pieces that were preventing an image from appearing and here is the result. I will explain my changes after the code.
public void xPix(Bitmap bmp, int n, Color cx, Color nx)
{
    var img = bmp.LockBits(new Rectangle(Point.Empty, bmp.Size), System.Drawing.Imaging.ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
    byte[] bmpBytes = new byte[Math.Abs(img.Stride) * img.Height];
    System.Runtime.InteropServices.Marshal.Copy(img.Scan0, bmpBytes, 0, bmpBytes.Length);
    for (int y = 0; y < img.Height; y++)
    {
        for (int x = 0; x < img.Width; x += n * 2)
        {
            cx = Color.FromArgb(BitConverter.ToInt32(bmpBytes, y * Math.Abs(img.Stride) + x * 4));
            if (x + n <= img.Width - 1) nx = Color.FromArgb(BitConverter.ToInt32(bmpBytes, y * Math.Abs(img.Stride) + (x + n) * 4));
            BitConverter.GetBytes(nx.ToArgb()).CopyTo(bmpBytes, y * Math.Abs(img.Stride) + x * 4);
            if (x + n <= img.Width - 1) BitConverter.GetBytes(cx.ToArgb()).CopyTo(bmpBytes, y * Math.Abs(img.Stride) + (x + n) * 4);
        }
    }
    System.Runtime.InteropServices.Marshal.Copy(bmpBytes, 0, img.Scan0, bmpBytes.Length);
    bmp.UnlockBits(img);
}

protected override void OnClick(EventArgs e)
{
    base.OnClick(e);
    Bitmap bmp = new Bitmap(@"C:\Users\bluem\Downloads\Default.png");
    for (int i = 0; i < bmp.Width; i++)
    {
        xPix(bmp, new Random().Next(20) + 1, System.Drawing.Color.White, System.Drawing.Color.Green);
    }
    Canvas.Image = bmp;
}
There's no such class as LockBitmap so I replaced it with the result of a call to Bitmap.LockBits directly.
The result of LockBits does not include functions for GetPixel and SetPixel, so I did what one normally does with the result of LockBits (see https://learn.microsoft.com/en-us/dotnet/api/system.drawing.bitmap.lockbits?view=netframework-4.7.2) and copied the data into a byte array instead.
When accessing the byte data directly, some math must be done to convert the x and y coordinates into a 1-dimensional coordinate within the array of bytes, which I did.
When accessing the byte data directly under the System.Drawing.Imaging.PixelFormat.Format32bppArgb pixel format, multiple bytes must be accessed to convert between byte data and a pixel color, which I did with BitConverter.GetBytes, BitConverter.ToInt32, Color.FromArgb and Color.ToArgb.
I don't think it's a good idea to be changing the Image in the middle of painting it. You should either be drawing the image directly during the Paint event, or changing the image outside the Paint event and allowing the system to draw it. So I used the OnClick of my form to trigger the function instead.
The first random number I got was 0, so I had to add 1 to avoid an endless loop.
The cx and nx parameters never seem to be used as inputs, so I put arbitrary color values in for them. Your x and y variables were not defined/declared anywhere.
If you want faster on-image action, you can use the Marshal.Copy method with Parallel.For.
Why not use the GetPixel method? Because every time you call it, your whole image is loaded into memory, one pixel is read, and the whole image is unloaded again. That happens on every iteration (for example, if you are working on a 500x500 px image, GetPixel will load all the pixels 500x500 times). When you work on images in C# (CV stuff), work on the raw bytes in memory.
I will show how to use LockBits for binarization because it is easy to explain.
int pixelBPP = Image.GetPixelFormatSize(resultBmp.PixelFormat) / 8;
unsafe
{
    BitmapData bmpData = resultBmp.LockBits(new Rectangle(0, 0, resultBmp.Width, resultBmp.Height), ImageLockMode.ReadWrite, resultBmp.PixelFormat);
    byte* ptr = (byte*)bmpData.Scan0; // address of the first line
    int height = resultBmp.Height;
    int width = resultBmp.Width * pixelBPP;
    Parallel.For(0, height, y =>
    {
        byte* offset = ptr + (y * bmpData.Stride); // set row
        for (int x = 0; x < width; x = x + pixelBPP)
        {
            byte value = (offset[x] + offset[x + 1] + offset[x + 2]) / 3 > threshold ? Byte.MaxValue : Byte.MinValue;
            offset[x] = value;
            offset[x + 1] = value;
            offset[x + 2] = value;
            if (pixelBPP == 4)
            {
                offset[x + 3] = 255;
            }
        }
    });
    resultBmp.UnlockBits(bmpData);
}
Now, an example with Marshal.Copy:
BitmapData bmpData = resultBmp.LockBits(new Rectangle(0, 0, resultBmp.Width, resultBmp.Height),
                                        ImageLockMode.ReadWrite,
                                        resultBmp.PixelFormat);
int bytes = bmpData.Stride * resultBmp.Height;
byte[] pixels = new byte[bytes];
Marshal.Copy(bmpData.Scan0, pixels, 0, bytes); // copy the bitmap bytes into managed memory
int height = resultBmp.Height;
int width = resultBmp.Width;
Parallel.For(0, height, y => // process one row per iteration
{
    int offset = y * bmpData.Stride; // start of the row
    for (int x = 0; x < width; x++)
    {
        int positionOfPixel = offset + x * pixelBPP; // remember the pixel format (bytes per pixel)!
        // do what you want with the pixel
    }
});
Marshal.Copy(pixels, 0, bmpData.Scan0, bytes); // copy the bytes back into the bitmap
resultBmp.UnlockBits(bmpData);
Remember, when you are working with raw bytes it is very important to account for the PixelFormat. If you work on an RGBA image, you need to handle every channel, stepping through each row in increments of the pixel size in bytes. I showed how to deal with an RGBA image on raw data in the binarization example. If LockBits alone is not fast enough, use Marshal.Copy.
public byte[] CropImage(byte[] bmp, Rectangle cropSize, int stride)
{
    //make a new byte array the size of the area of cropped image
    int totalSize = cropSize.Width * 3 * cropSize.Height;
    int totalLength = bmp.Length;
    int startingPoint = (stride * cropSize.Y) + cropSize.X * 3;
    byte[] croppedImg = new byte[totalSize];
    //for the total size of the old array
    for (int y = 0; y < totalLength; y += stride)
    {
        //copy a row of pixels from bmp to croppedImg
        Array.Copy(bmp, startingPoint + y, croppedImg, y, cropSize.Width * 3);
    }
    return croppedImg;
}
Array.Copy is being skipped over and not copying anything.
I thought maybe I made a mistake, but even when copying each byte manually it does the same thing.
This function takes in a raw BGR image as a byte[] array and crops it based on a Rectangle(x, y, width, height), finally returning the cropped byte array to the main function.
Here
for (int y = 0; y < totalLength; y += stride)
{
    //copy a row of pixels from bmp to croppedImg
    Array.Copy(bmp, startingPoint + y, croppedImg, y, cropSize.Width * 3);
}
you pass y to the Array.Copy argument that is supposed to be the destinationIndex, which it is not in your case: y advances by the source stride, while rows in the destination are only cropSize.Width * 3 bytes apart.
In order to avoid such mistakes, use better names for your variables (and use more variables, they are cheap). For instance, the code could be like this
public byte[] CropImage(byte[] source, Rectangle cropRect, int sourceStride)
{
    int targetStride = cropRect.Width * 3;
    var target = new byte[cropRect.Height * targetStride];
    int sourcePos = cropRect.Y * sourceStride + cropRect.X * 3;
    int targetPos = 0;
    for (int i = 0; i < cropRect.Height; i++)
    {
        Array.Copy(source, sourcePos, target, targetPos, targetStride);
        sourcePos += sourceStride;
        targetPos += targetStride;
    }
    return target;
}
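A hypothetical usage example (the 640x480 BGR frame and the GetFrame() helper are assumptions, not part of the original post):
// Crop a 100x50 region starting at (10, 20) out of a 640x480 BGR frame.
byte[] frame = GetFrame();                    // 640 * 480 * 3 bytes, no row padding assumed
var cropRect = new Rectangle(10, 20, 100, 50);
byte[] cropped = CropImage(frame, cropRect, 640 * 3);
// cropped.Length == 100 * 3 * 50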
I have as input a ushort array of image data. Other inputs, such as the width and height, are gathered elsewhere. The data also has min and max values that I want to use; those are stored in 'io_current'.
I want to return a Format8bppIndexed Bitmap, and I have this code, but what am I doing wrong?
private Bitmap CreateBitmap(ushort[,] pixels16)
{
    int width = pixels16.GetLength(1);
    int height = pixels16.GetLength(0);
    Bitmap bmp = new Bitmap(width, height, System.Drawing.Imaging.PixelFormat.Format8bppIndexed);
    BitmapData bmd = bmp.LockBits(new Rectangle(0, 0, width, height),
        System.Drawing.Imaging.ImageLockMode.ReadOnly, bmp.PixelFormat);
    // This 'unsafe' part of the code populates the bitmap bmp with data stored in pixel16.
    // It does so using pointers, and therefore the need for 'unsafe'.
    unsafe
    {
        //int pixelSize = 4;
        int i, j; //, j1; //, i1;
        byte b;
        ushort sVal;
        double lPixval;
        //The array has max and min constraints
        int distance = io_current.MaximumValue - io_current.MinimumValue;
        for (i = 0; i < bmd.Height; ++i)
        {
            byte* row = (byte*)bmd.Scan0 + (i * bmd.Stride);
            //i1 = i * bmd.Height;
            for (j = 0; j < bmd.Width; ++j)
            {
                sVal = (ushort)(pixels16[i, j]);
                lPixval = ((sVal - io_current.MinimumValue) * 255) / distance; // Convert to a 255 value range
                //lPixval = ((sVal - io_current.MinimumValue) / distance) * 255;
                //lPixval = 255 - lPixval; //invert the value
                if (lPixval > 255) lPixval = 255;
                if (lPixval < 0) lPixval = 0;
                b = (byte)(lPixval);
                //j1 = j * pixelSize; //Pixelsize is one
                row[j] = b; // Just one in 8bpp
                //Not necessary for format8bppindexed
                //row[j1] = b; // Red
                //row[j1 + 1] = b; // Green
                //row[j1 + 2] = b; // Blue
                //row[j1 + 3] = 255; //No Alpha channel in 24bit
            }
        }
    }
    bmp.UnlockBits(bmd);
    return bmp;
}
I'm getting a black screen, or a one-color screen; basically no usable data is returned. As is obvious from the comments, I tried to convert this from 24-bit bitmap code and thought it would be easy.
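One thing worth checking (my note, not from the original post): a Format8bppIndexed bitmap is rendered through its palette, and the palette of a freshly created bitmap is not a grayscale ramp, so even correct index bytes can display as unexpected colors. A minimal sketch of installing a gray palette before returning the bitmap:
// Sketch: make index value n display as the gray level (n, n, n).
ColorPalette palette = bmp.Palette;      // get a copy of the current palette
for (int n = 0; n < 256; n++)
    palette.Entries[n] = Color.FromArgb(n, n, n);
bmp.Palette = palette;                   // assign the copy back so it takes effect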
I'm trying to determine the optimal way to flip an image across the Y axis. For every pixel, there are 4 bytes, and each set of 4 bytes needs to remain together in order but get shifted. Here's the best I've come up with so far.
This only takes 0.1-0.2 s for a 1280x960 image, but with video such performance is crippling. Any suggestions?
Initial implementation
private void ReverseFrameInPlace(int width, int height, int bytesPerPixel, ref byte[] framePixels)
{
    System.Diagnostics.Stopwatch s = System.Diagnostics.Stopwatch.StartNew();
    int stride = width * bytesPerPixel;
    int halfStride = stride / 2;
    int byteJump = bytesPerPixel * 2;
    int length = stride * height;
    byte pix;
    for (int i = 0, a = stride, b = stride - bytesPerPixel;
        i < length; i++)
    {
        if (b % bytesPerPixel == 0)
        {
            b -= byteJump;
        }
        if (i > 0 && i % halfStride == 0)
        {
            i = a;
            a += stride;
            b = a - bytesPerPixel;
            if (i >= length)
            {
                break;
            }
        }
        pix = framePixels[i];
        framePixels[i] = framePixels[b];
        framePixels[b++] = pix;
    }
    s.Stop();
    System.Console.WriteLine("ReverseFrameInPlace: {0}", s.Elapsed);
}
Revision #1
Revised with indexes and Buffer.BlockCopy per SLaks and Alexei. Also added a Parallel.For since the indexes allow for it.
int[] pixelIndexF = null;
int[] pixelIndexB = null;

private void ReverseFrameInPlace(int width, int height, int bytesPerPixel, byte[] framePixels)
{
    System.Diagnostics.Stopwatch s = System.Diagnostics.Stopwatch.StartNew();
    if (pixelIndexF == null)// || pixelIndex.Length != (width * height))
    {
        int stride = width * bytesPerPixel;
        int length = stride * height;
        pixelIndexF = new int[width * height / 2];
        pixelIndexB = new int[width * height / 2];
        for (int i = 0, a = stride, b = stride, index = 0;
            i < length; i++)
        {
            b -= bytesPerPixel;
            if (i > 0 && i % (width / 2) == 0)
            {
                //i = a;
                i += width / 2;
                a += stride;
                b = a - bytesPerPixel;
                if (index >= pixelIndexF.Length)
                {
                    break;
                }
            }
            pixelIndexF[index] = i * bytesPerPixel;
            pixelIndexB[index++] = b;
        }
    }
    Parallel.For(0, pixelIndexF.Length, new Action<int>(delegate(int i)
    {
        byte[] buffer = new byte[bytesPerPixel];
        Buffer.BlockCopy(framePixels, pixelIndexF[i], buffer, 0, bytesPerPixel);
        Buffer.BlockCopy(framePixels, pixelIndexB[i], framePixels, pixelIndexF[i], bytesPerPixel);
        Buffer.BlockCopy(buffer, 0, framePixels, pixelIndexB[i], bytesPerPixel);
    }));
    s.Stop();
    System.Console.WriteLine("ReverseFrameInPlace: {0}", s.Elapsed);
}
Revision #2
private void ReverseFrameInPlace(int width, int height, System.Drawing.Imaging.PixelFormat pixelFormat, byte[] framePixels)
{
    System.Diagnostics.Stopwatch s = System.Diagnostics.Stopwatch.StartNew();
    System.Drawing.Rectangle imageBounds = new System.Drawing.Rectangle(0, 0, width, height);
    //create destination bitmap, get handle
    System.Drawing.Bitmap bitmap = new System.Drawing.Bitmap(width, height, pixelFormat);
    System.Drawing.Imaging.BitmapData bitmapData = bitmap.LockBits(imageBounds, System.Drawing.Imaging.ImageLockMode.ReadWrite, bitmap.PixelFormat);
    IntPtr ptr = bitmapData.Scan0;
    //byte[] to bmap
    System.Runtime.InteropServices.Marshal.Copy(framePixels, 0, ptr, framePixels.Length);
    bitmap.UnlockBits(bitmapData);
    //flip
    bitmap.RotateFlip(System.Drawing.RotateFlipType.RotateNoneFlipX);
    //get handle for bitmap to byte[]
    bitmapData = bitmap.LockBits(imageBounds, System.Drawing.Imaging.ImageLockMode.ReadWrite, bitmap.PixelFormat);
    ptr = bitmapData.Scan0;
    System.Runtime.InteropServices.Marshal.Copy(ptr, framePixels, 0, framePixels.Length);
    bitmap.UnlockBits(bitmapData);
    s.Stop();
    System.Console.WriteLine("ReverseFrameInPlace: {0}", s.Elapsed);
}
I faced almost the same issue but in my case I needed to flip the image for saving it to an .avi container.
I used the Array.Copy() method instead and, surprisingly, it seems faster than the others (at least on my machine). The source image that I used was 720 x 576 pixels with 3 bytes per pixel. This method took between 0.001 and 0.01 seconds versus about 0.06 seconds for both your revisions.
private byte[] ReverseFrameInPlace2(int stride, byte[] framePixels)
{
    System.Diagnostics.Stopwatch s = System.Diagnostics.Stopwatch.StartNew();
    var reversedFramePixels = new byte[framePixels.Length];
    var lines = framePixels.Length / stride;
    for (var line = 0; line < lines; line++)
    {
        Array.Copy(framePixels, framePixels.Length - ((line + 1) * stride), reversedFramePixels, line * stride, stride);
    }
    s.Stop();
    System.Console.WriteLine("ReverseFrameInPlace2: {0}", s.Elapsed);
    return reversedFramePixels;
}
Try calling Buffer.BlockCopy on each range of 4 bytes; that should be faster.
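A minimal sketch of that suggestion (my own illustration, assuming 4 bytes per pixel and an unpadded, row-major buffer): swap mirrored pixels within each row, one 4-byte block at a time, using Buffer.BlockCopy.
private static void FlipHorizontal(byte[] pixels, int width, int height, int bytesPerPixel)
{
    int stride = width * bytesPerPixel;
    byte[] tmp = new byte[bytesPerPixel];
    for (int y = 0; y < height; y++)
    {
        int row = y * stride;
        for (int x = 0; x < width / 2; x++)
        {
            int left = row + x * bytesPerPixel;
            int right = row + (width - 1 - x) * bytesPerPixel;
            Buffer.BlockCopy(pixels, left, tmp, 0, bytesPerPixel);         // save the left pixel
            Buffer.BlockCopy(pixels, right, pixels, left, bytesPerPixel);  // right -> left
            Buffer.BlockCopy(tmp, 0, pixels, right, bytesPerPixel);        // saved left -> right
        }
    }
}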
You could parallelize execution on the CPU using any technique, or use a pixel shader and do it on the GPU. If you only do this to display flipped video, you would do best to use DirectX and simply do the transformation on the GPU.
A couple more random things to try and measure:
Pre-build an array of indexes to copy to for a line (like [12,13,14,15, 8,9,10,11, 4,5,6,7, 0,1,2,3]) instead of the complicated ifs executed on each line; see the sketch below.
Try copying to a new destination instead of in-place.
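A minimal sketch of that pre-built index idea (my own illustration, not from the original answer): compute once, for the row width, the source offset each byte of the mirrored row should come from, then reuse that map for every row.
private static int[] BuildMirroredRowIndex(int width, int bytesPerPixel)
{
    int stride = width * bytesPerPixel;
    int[] map = new int[stride];
    for (int x = 0; x < width; x++)
        for (int b = 0; b < bytesPerPixel; b++)
            map[x * bytesPerPixel + b] = (width - 1 - x) * bytesPerPixel + b;
    return map; // e.g. width = 4, bytesPerPixel = 4 gives [12,13,14,15, 8,9,10,11, 4,5,6,7, 0,1,2,3]
}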
Use one of the many transforms supplied by the .NET library:
http://msdn.microsoft.com/en-us/library/aa970271.aspx
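For example, in WPF a horizontal flip can be expressed as a negative-scale transform on a BitmapSource (a minimal sketch; the file path and the Image control named 'image' are assumptions):
// Mirror a BitmapSource across the Y axis with a ScaleTransform of (-1, 1).
BitmapSource source = new BitmapImage(new Uri(@"C:\path\frame.png"));
var flipped = new TransformedBitmap(source, new ScaleTransform(-1, 1));
image.Source = flipped;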
Edit: Here's another example:
http://www.switchonthecode.com/tutorials/csharp-tutorial-image-editing-rotate
Another option might be to use the XNA framework to do manipulations on images. There is a small example in How to resize and save a Texture2D in XNA?. I have no idea how fast it is, but I could see how it should be pretty fast, considering the functions are supposed to be used in games with high frame rates.