Creating 16-bit+ grayscale images in WPF - C#

I want to create a 16-bit grayscale image from data values in my WPF program. Currently I have been looking at using a WriteableBitmap with PixelFormats.Gray16 set.
However, I can't get this to work, and a Microsoft page (http://msdn.microsoft.com/en-us/magazine/cc534995.aspx) lists the Gray16 format as not writeable via WriteableBitmap, but does not suggest an alternative way to create one.
Currently my code operates within a loop, where i iterates over the image height (rows) and j over the width (columns), and looks something like this:
short dataValue = GetDataSamplePixelValue(myDataValue);
//the pixel to fill with this data value:
int pixelOffset = ((i * imageWidth) + j) * bytesPerPixel;
//set the pixel colour values:
pixels[pixelOffset] = dataValue;
I do get an image with this, but it is just a bunch of vertical black and white lines. I don't have this problem when using 8-bit grayscale data (in which case short in the example above is changed to byte).
Does anyone know how to create a 16-bit per pixel or higher grayscale image using WPF? This image will ultimately need to be saved as well.
Any advice is much appreciated.
EDIT
Further to this I have done some editing and am now getting a sensible image using the Gray16 PixelFormat. It's very difficult for me to tell whether it is actually 16-bit, though: a colour count by an image program gives 256, and I am not sure if this is because the image is being constrained by WPF, or because the image program does not support it, as apparently many image programs ignore the lower 8 bits. For now I will stick with what I have.
For information the code is like this:
myBitmap = new WriteableBitmap((int)visualRect.Width, (int)visualRect.Height, 96, 96, PixelFormats.Gray16, null);
int bytesPerPixel = myBitmap.Format.BitsPerPixel / 8;
ushort[] pixels = new ushort[(int)myBitmap.PixelWidth * (int)myBitmap.PixelHeight];
//if there is a shift factor, set the background colour to white:
if (shiftFactor > 0)
{
for (int i = 0; i < pixels.Length; i++)
{
pixels[i] = ushort.MaxValue; //65535 = white in Gray16
}
}
//the area to be drawn to:
Int32Rect drawRegionRect = new Int32Rect(0, 0, (int)myBitmap.PixelWidth, (int)myBitmap.PixelHeight);
//the number of samples available at this line (reduced by one so that the picked sample can't lie beyond the array):
double availableSamples = myDataFile.DataSamples.Length - 1;
for (int i = 0; i < numDataLinesOnDisplay; i++)
{
//the current line to use:
int currentLine = ((numDataLinesOnDisplay - 1) - i) + startLine < 0 ? 0 : ((numDataLinesOnDisplay - 1) - i) + startLine;
for (int j = 0; j < myBitmap.PixelWidth; j++)
{
//data sample to use:
int sampleToUse = (int)(Math.Floor((availableSamples / myBitmap.PixelWidth) * j));
//get the data value:
ushort dataValue = GetDataSamplePixelValue(sampleToUse);
//the pixel to fill with this data value:
int pixelOffset = (((i + shiftFactor) * (int)myBitmap.PixelWidth) + j);
//set the pixel colour values:
pixels[pixelOffset] = dataValue;
}
}
//copy the pixel array into the image:
int stride = myBitmap.PixelWidth * bytesPerPixel;
myBitmap.WritePixels(drawRegionRect, pixels, stride, 0);
In this example startLine and shiftFactor are already set and depend on the point in the data file the user is viewing. shiftFactor is only non-zero when the data file is smaller than the screen, in which case I use it to centre the image vertically.
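One way to check whether the buffer really contains more than 256 distinct levels, independent of any external image program, is to count the distinct 16-bit values before calling WritePixels. A rough sketch (assuming the pixels array from the code above, plus System.Collections.Generic and System.Diagnostics):
    //count how many distinct 16-bit gray levels the buffer actually contains;
    //if this is <= 256 the data itself is effectively 8-bit, whatever the PixelFormat says
    int distinctLevels = new HashSet<ushort>(pixels).Count;
    Debug.WriteLine("Distinct 16-bit gray levels: " + distinctLevels);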

Find the bug in your code, or post your full code. The following example with a Gray16 image works correctly:
var width = 300;
var height = 300;
var bitmap = new WriteableBitmap(width, height, 96, 96, PixelFormats.Gray16, null);
var pixels = new ushort[width * height];
for (var y = 0; y < height; ++y)
for (var x = 0; x < width; ++x)
{
//a diagonal ramp that sweeps the 16-bit range several times; the mirror logic below turns it into a triangle wave
var v = (0x10000 * 2 * x / width + 0x10000 * 3 * y / height);
var isMirror = (v / 0x10000) % 2 == 1;
v = v % 0xFFFF;
if (isMirror)
v = 0xFFFF - v;
pixels[y * width + x] = (ushort)v;
}
bitmap.WritePixels(new Int32Rect(0, 0, width, height), pixels, width * 2, 0); //stride in bytes = width * 2 for 16 bpp
var encoder = new PngBitmapEncoder();
encoder.Frames.Add(BitmapFrame.Create(bitmap));
using (var stream = System.IO.File.Create("gray16.png"))
encoder.Save(stream);
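If you also want to check the result with tools that handle 16-bit TIFFs better than PNGs, the same bitmap can be written with WPF's TiffBitmapEncoder; an untested sketch along the same lines:
    var tiffEncoder = new TiffBitmapEncoder();
    tiffEncoder.Frames.Add(BitmapFrame.Create(bitmap));
    using (var stream = System.IO.File.Create("gray16.tif"))
        tiffEncoder.Save(stream);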

For reference, it is unlikely that a screen can display a 16-bit grayscale image, and also, this format is not well supported by Windows. For example, Windows XP cannot even display a 16-bit grayscale image in Photo viewer, though Windows 7+ can (I'm not sure about Vista, I don't have it).
On top of that, the built-in .NET method for opening TIFs will not load a 16-bit grayscale image.
The solution for loading and saving 16-bit grayscale images, and the one I would recommend for TIFs in general, is LibTIFF. You then have the option of loading the whole TIF, or loading it line by line, among other methods. I recommend loading it line by line, since then you only keep the data that will be visible on screen; some TIFs these days get very large and cannot be held in a single array.
So ultimately, do not worry about displaying 16-bit grayscale on screen: it may be limited by the capabilities of the system/monitor, and the human eye cannot tell the difference between this and 8-bit anyway. If, however, you need to load or save 16-bit, use LibTIFF.
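For what it's worth, here is a rough sketch of line-by-line reading with LibTiff.Net (the managed LibTIFF port); the file name and the assumption of a strip-based 16-bit grayscale TIFF are mine:
    using BitMiracle.LibTiff.Classic;

    using (Tiff tif = Tiff.Open("gray16.tif", "r"))
    {
        int width = tif.GetField(TiffTag.IMAGEWIDTH)[0].ToInt();
        int height = tif.GetField(TiffTag.IMAGELENGTH)[0].ToInt();
        byte[] scanline = new byte[tif.ScanlineSize()];
        ushort[] row = new ushort[width];
        for (int y = 0; y < height; y++)
        {
            //read one line at a time so only the rows that are visible need to be kept around
            tif.ReadScanline(scanline, y);
            Buffer.BlockCopy(scanline, 0, row, 0, width * sizeof(ushort));
            //row[] now holds the 16-bit samples for line y
        }
    }
This only sketches the read path; writing works the same way with WriteScanline after setting the usual tags.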

Related

.NET Bitmap.Load method produces different results on different computers

I am trying to load a JPEG file and delete all black and white pixels from the image.
C# Code:
...
m_SrcImage = new Bitmap(imagePath);
Rectangle r = new Rectangle(0, 0, m_SrcImage.Width, m_SrcImage.Height);
BitmapData bd = m_SrcImage.LockBits(r, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
//Load Colors
int[] colours = new int[m_SrcImage.Width * m_SrcImage.Height];
Marshal.Copy(bd.Scan0, colours, 0, colours.Length);
m_SrcImage.UnlockBits(bd);
int len = colours.Length;
List<Color> result = new List<Color>(len);
for (int i = 0; i < len; ++i)
{
uint w = ((uint)colours[i]) & 0x00FFFFFF; //Delete alpha-channel
if (w != 0x00000000 && w != 0x00FFFFFF) //Check pixel is not black or white
{
w |= 0xFF000000; //Return alpha channel
result.Add(Color.FromArgb((int)w));
}
}
...
After that, I try to find the unique colors in the list with this code:
result.Sort((a, b) =>
{
return a.R != b.R ? a.R - b.R :
a.G != b.G ? a.G - b.G :
a.B != b.B ? a.B - b.B :
0;
});
List<Color> uniqueColors = new List<Color>( result.Count);
Color rgbTemp = result[0];
for (int i = 0; i < len; ++i)
{
if (rgbTemp == result[i])
{
continue;
}
uniqueColors.Add(rgbTemp);
rgbTemp = result[i];
}
uniqueColors.Add(rgbTemp);
And this code produces different results on different machines for the same image!
For example, on this image it produces:
43198 unique colors on XP SP3 with .NET version 4
43168 unique colors on Win7 Ultimate with .NET version 4.5
Minimum test project you can download here. It just opens selected image and produces txt-file with unique colors.
One more fact. Some pixels are read differently on different machines. I compare txt-files with notepad++ and it shows that some pixels have different RGB components. The difference is 1 for each component, e.g.
Win7 pixel: 255 200 100
WinXP pixel: 254 199 99
I have read this post:
stackoverflow.com/questions/2419598/why-might-different-computers-calculate-different-arithmetic-results-in-vb-net
(sorry, I don't have enough reputation for a normal link), but there was no information there about how to fix it.
The project was compiled for the .NET 4 Client Profile on a Windows 7 machine in VS 2015 Community Edition.
Wikipedia has this to say about the accuracy requirements for JPEG Decoders:
The encoding description in the JPEG standard does not fix the precision needed for the output compressed image. However, the JPEG standard (and the similar MPEG standards) includes some precision requirements for the decoding, including all parts of the decoding process (variable length decoding, inverse DCT, dequantization, renormalization of outputs); the output from the reference algorithm must not exceed:
a maximum of one bit of difference for each pixel component
low mean square error over each 8×8-pixel block
very low mean error over each 8×8-pixel block
very low mean square error over the whole image
extremely low mean error over the whole image
(my emphasis)
In short, there are simply two different decoder implementations at play here, and they produce different images, within the accuracy requirement (1 bit = +/- 1 in the component values, as you observed).
So, short of using the same (non-built-in) JPEG decoder, this is to be expected. If you need the exact same output, then you probably need to switch to a different decoder, one that will behave the same no matter which .NET version or Windows you're running on. I'm guessing that GDI+ is the culprit here, as it has undergone larger changes since Windows XP.
I solved my problem by adding LibJpeg.NET to the project and writing this code:
private Bitmap JpegToBitmap(JpegImage jpeg)
{
int width = jpeg.Width;
int height = jpeg.Height;
// Read the image into the memory buffer
int[] raster = new int[height * width];
for(int i = 0; i < height; ++i)
{
byte[] temp = jpeg.GetRow(i).ToBytes();
for (int j = 0; j < temp.Length; j += 3)
{
int offset = i*width + j / 3;
raster[offset] = 0;
raster[offset] |= (((int)temp[j+2]) << 16);
raster[offset] |= (((int)temp[j+1]) << 8);
raster[offset] |= (int)temp[j];
}
}
Bitmap bmp = new Bitmap(width, height, PixelFormat.Format24bppRgb);
Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
BitmapData bmpdata = bmp.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
byte[] bits = new byte[bmpdata.Stride * bmpdata.Height];
for (int y = 0; y < bmp.Height; y++)
{
int rasterOffset = y * bmp.Width;
int bitsOffset = (bmp.Height - y - 1) * bmpdata.Stride;
for (int x = 0; x < bmp.Width; x++)
{
int rgba = raster[rasterOffset++];
bits[bitsOffset++] = (byte)((rgba >> 16) & 0xff);
bits[bitsOffset++] = (byte)((rgba >> 8) & 0xff);
bits[bitsOffset++] = (byte)(rgba & 0xff);
}
}
System.Runtime.InteropServices.Marshal.Copy(bits, 0, bmpdata.Scan0, bits.Length);
bmp.UnlockBits(bmpdata);
return bmp;
}
So, that's enough for me.

C# How do I convert my GetPixel / SetPixel color processing to LockBits?

EDIT: I deeply appreciate the replies. What I need more than anything here is sample code for what I do with the few lines of code in the nested loop, since that's what works right in GetPixel/SetPixel, but also what I can't get to work right using Lockbits. Thank you
I'm trying to convert my image processing filters from GetPixel / SetPixel to Lockbits, to improve processing time. I have seen Lockbits tutorials here on Stack Overflow, MSDN, and other sites as well, but I'm doing something wrong. I'm starting with an exceedingly simple filter, which simply reduces green to create a red and purple effect. Here's my code:
private void redsAndPurplesToolStripMenuItem_Click(object sender, EventArgs e)
{
// Get bitmap from picturebox
Bitmap bmpMain = (Bitmap)pictureBoxMain.Image.Clone();
// search through each pixel via x, y coordinates, examine and make changes. Dont let values exceed 255 or fall under 0.
for (int y = 0; y < bmpMain.Height; y++)
for (int x = 0; x < bmpMain.Width; x++)
{
bmpMain.GetPixel(x, y);
Color c = bmpMain.GetPixel(x, y);
int myRed = c.R, myGreen = c.G, myBlue = c.B;
myGreen -= 128;
if (myGreen < 0) myGreen = 0;
bmpMain.SetPixel(x, y, Color.FromArgb(255, myRed, myGreen, myBlue));
}
// assign the new bitmap to the picturebox
pictureBoxMain.Image = (Bitmap)bmpMain;
// Save a copy to the HD for undo / redo.
string myString = Environment.GetEnvironmentVariable("temp", EnvironmentVariableTarget.Machine);
pictureBoxMain.Image.Save(myString + "\\ColorAppRedo.png", System.Drawing.Imaging.ImageFormat.Png);
}
So that GetPixel / SetPixel code works fine, but it's slow. So I tried this:
private void redsAndPurplesToolStripMenuItem_Click(object sender, EventArgs e)
{
// Get bitmap from picturebox
Bitmap bmpMain = (Bitmap)pictureBoxMain.Image.Clone();
Rectangle rect = new Rectangle(Point.Empty, bmpMain.Size);
BitmapData bmpData = bmpMain.LockBits(rect, ImageLockMode.ReadOnly, bmpMain.PixelFormat);
// search through each pixel via x, y coordinates, examine and make changes. Dont let values exceed 255 or fall under 0.
for (int y = 0; y < bmpMain.Height; y++)
for (int x = 0; x < bmpMain.Width; x++)
{
bmpMain.GetPixel(x, y);
Color c = new Color();
int myRed = c.R, myGreen = c.G, myBlue = c.B;
myGreen -= 128;
if (myGreen < 0) myGreen = 0;
bmpMain.SetPixel(x, y, Color.FromArgb(255, myRed, myGreen, myBlue));
}
bmpMain.UnlockBits(bmpData);
// assign the new bitmap to the picturebox
pictureBoxMain.Image = (Bitmap)bmpMain;
// Save a copy to the HD for undo / redo.
string myString = Environment.GetEnvironmentVariable("temp", EnvironmentVariableTarget.Machine);
pictureBoxMain.Image.Save(myString + "\\ColorAppRedo.png", System.Drawing.Imaging.ImageFormat.Png);
}
Which throws the error "An unhandled exception of type 'System.InvalidOperationException' occurred in System.Drawing.dll Additional information: Bitmap region is already locked" when it reaches the first line of the nested loop.
I realize this has to be a beginner's error, I'd appreciate if someone could demonstrate the correct way to convert this very simple filter to Lockbits. Thank you very much
The memory pointed to by Scan0 is laid out as BGRA BGRA BGRA BGRA ... and so on,
where B = Blue, G = Green, R = Red, A = Alpha.
Example of a very small bitmap, 4 pixels wide and 3 pixels high:
BGRA BGRA BGRA BGRA
BGRA BGRA BGRA BGRA
BGRA BGRA BGRA BGRA
stride = width * bytesPerPixel = 4 * 4 = 16 bytes
height = 3
maxLength = stride * height = 16 * 3 = 48 bytes
To reach a certain pixel in the image (x, y) use this formula
int certainPixel = bytesPerPixel*x + stride * y;
B = scan0[certainPixel + 0];
G = scan0[certainPixel + 1];
R = scan0[certainPixel + 2];
A = scan0[certainPixel + 3];
public unsafe void Test(Bitmap bmp)
{
int width = bmp.Width;
int height = bmp.Height;
//TODO determine bytes per pixel
int bytesPerPixel = 4; // we assume that image is Format32bppArgb
int maxPointerLength = width * height * bytesPerPixel;
int stride = width * bytesPerPixel;
byte R, G, B, A;
BitmapData bData = bmp.LockBits(
new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height),
ImageLockMode.ReadWrite, bmp.PixelFormat);
byte* scan0 = (byte*)bData.Scan0.ToPointer();
for (int i = 0; i < maxPointerLength; i += 4)
{
B = scan0[i + 0];
G = scan0[i + 1];
R = scan0[i + 2];
A = scan0[i + 3];
// do anything with the colors
// Set the green component to 0
G = 0;
// do something with red
R = R < 54 ? (byte)(R + 127) : R;
// write the modified components back, otherwise the bitmap is left unchanged
scan0[i + 0] = B;
scan0[i + 1] = G;
scan0[i + 2] = R;
scan0[i + 3] = A;
}
bmp.UnlockBits(bData);
}
You can test it yourself. Create a very small bitmap (a few pixels wide/high) in Paint or any other program and put a breakpoint at the beginning of the method.
"Additional information: Bitmap region is already locked"
You now know why GetPixel() is slow: it also uses Un/LockBits under the hood, but does so for each individual pixel, and that overhead steals CPU cycles. A bitmap can be locked only once, which is why you got the exception. It is also the basic reason that you can't access a bitmap from multiple threads simultaneously.
The point of LockBits is that you can access the memory occupied by the bitmap pixels directly. The BitmapData.Scan0 member gives you the memory address. Directly addressing the memory is very fast. You'll however have to work with an IntPtr, the type of Scan0, that requires using a pointer or Marshal.Copy(). Using a pointer is the optimal way, there are many existing examples on how to do this, I won't repeat it here.
... = bmpMain.LockBits(rect, ImageLockMode.ReadOnly, bmpMain.PixelFormat);
The last argument you pass is very, very important. It selects the pixel format of the data and that affects the code you write. Using bmpMain.PixelFormat is the fastest way to lock but it is also very inconvenient. Since that now requires you to adapt your code to the specific pixel format. There are many, take a good look at the PixelFormat enum. They differ in the number of bytes taken for each pixel and how the colors are encoded in the bits.
The only convenient pixel format is Format32bppArgb, every pixel takes 4 bytes, the color/alpha is encoded in a single byte and you can very easily and quickly address the pixels with an uint*. You can still deal with Format24bppRgb but you now need a byte*, that's a lot slower. The ones that have a P in the name are pre-multiplied formats, very fast to display but exceedingly awkward to deal with. You may thus be well ahead by taking the perf hit of forcing LockBits() to convert the pixel format. Paying attention to the pixel format up front is important to avoid this kind of lossage.
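For illustration, here is a minimal unsafe sketch of the question's "reduce green" filter written against LockBits; it forces Format32bppArgb (so each pixel is 4 bytes, stored B G R A) and assumes the project has unsafe code enabled plus the usual System.Drawing / System.Drawing.Imaging usings:
    private static unsafe void ReduceGreen(Bitmap bmp)
    {
        Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
        //force 32bppArgb so the layout is known: 4 bytes per pixel, B G R A
        BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
        try
        {
            byte* scan0 = (byte*)data.Scan0.ToPointer();
            for (int y = 0; y < data.Height; y++)
            {
                byte* row = scan0 + y * data.Stride; //Stride is the length of one row in bytes
                for (int x = 0; x < data.Width; x++)
                {
                    byte* px = row + x * 4;
                    int g = px[1] - 128;           //px[1] is the green component
                    px[1] = (byte)(g < 0 ? 0 : g); //clamp at 0, as in the GetPixel version
                }
            }
        }
        finally
        {
            bmp.UnlockBits(data);
        }
    }
After this runs, assign the bitmap back to the PictureBox and save it exactly as in the original GetPixel/SetPixel version.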

Create Image Mask

The user provides my app with an image, from which the app needs to make a mask:
The mask contains a red pixel for each transparent pixel in the original image.
I tried the following:
Bitmap OrgImg = (Bitmap)Image.FromFile(FilePath);
Bitmap NewImg = new Bitmap(OrgImg.Width, OrgImg.Height);
for (int y = 0; y <= OrgImg.Height - 1; y++) {
for (int x = 0; x <= OrgImg.Width - 1; x++) {
if (OrgImg.GetPixel(x, y).A != 255) {
NewImg.SetPixel(x, y, Color.FromArgb(255 - OrgImg.GetPixel(x, y).A, 255, 0, 0));
}
}
}
OrgImg.Dispose();
PictureBox1.Image = NewImg;
I am worried about the performance on slow PCs. Is there a better approach to do this?
It is perfectly acceptable to use GetPixel() if it is only used sporadically, e.g. on loading one image. However, if you want to do more serious image processing, it is better to work directly with BitmapData. A small example:
//Load the bitmap
Bitmap image = (Bitmap)Image.FromFile("image.png");
//Get the bitmap data
var bitmapData = image.LockBits (
new Rectangle (0, 0, image.Width, image.Height),
ImageLockMode.ReadWrite,
image.PixelFormat
);
//Initialize an array for all the image data
byte[] imageBytes = new byte[bitmapData.Stride * image.Height];
//Copy the bitmap data to the local array
Marshal.Copy(bitmapData.Scan0,imageBytes,0,imageBytes.Length);
//Unlock the bitmap
image.UnlockBits(bitmapData);
//Find pixelsize
int pixelSize = Image.GetPixelFormatSize(image.PixelFormat);
// An example on how to use the pixels, lets make a copy
int x = 0;
int y = 0;
var bitmap = new Bitmap (image.Width, image.Height);
//Loop pixels (note: this assumes the stride has no padding, i.e. Stride == Width * pixelSize / 8)
for(int i = 0; i < imageBytes.Length; i += pixelSize / 8)
{
//Copy the bytes of one pixel into a local array (stored in B, G, R order)
var pixelData = new byte[3];
Array.Copy(imageBytes, i, pixelData, 0, 3);
//Get the color of a pixel (note the B, G, R byte order)
var color = Color.FromArgb (pixelData [2], pixelData [1], pixelData [0]);
//Set the color of a pixel
bitmap.SetPixel (x,y,color);
//Map the 1D array to (x,y)
x++;
if( x >= bitmap.Width)
{
x=0;
y++;
}
}
//Save the duplicate
bitmap.Save ("image_copy.png");
This approach is indeed slow. A better approach would be using Lockbits and access the underlying matrix directly. Take a look at https://web.archive.org/web/20141229164101/http://bobpowell.net/lockingbits.aspx or http://www.mfranc.com/programming/operacje-na-bitmapkach-net-1/ or https://learn.microsoft.com/en-us/dotnet/api/system.drawing.bitmap.lockbits or other articles about lockbits in StackOverflow.
It's a tiny bit more complex since you'll have to work with bytes directly (4 per pixel if you're working with RGBA), but the performance boost is significant and is well worth it.
Another note: OrgImg.GetPixel(x, y) is slow. If you're sticking with it (and not LockBits), make sure you only call it once per pixel rather than twice (it may already be optimized, just check if there's a difference).
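As a rough illustration of what the LockBits version of this particular mask could look like (assuming 32bppArgb, Marshal.Copy rather than unsafe pointers, and the usual System.Drawing.Imaging / System.Runtime.InteropServices usings):
    Bitmap OrgImg = (Bitmap)Image.FromFile(FilePath);
    Bitmap NewImg = new Bitmap(OrgImg.Width, OrgImg.Height, PixelFormat.Format32bppArgb);
    Rectangle rect = new Rectangle(0, 0, OrgImg.Width, OrgImg.Height);
    //lock the source read-only and force 32bppArgb so every pixel is 4 bytes (B, G, R, A)
    BitmapData src = OrgImg.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    BitmapData dst = NewImg.LockBits(rect, ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
    byte[] srcBytes = new byte[src.Stride * src.Height];
    byte[] dstBytes = new byte[dst.Stride * dst.Height];
    Marshal.Copy(src.Scan0, srcBytes, 0, srcBytes.Length);
    for (int y = 0; y < OrgImg.Height; y++)
    {
        for (int x = 0; x < OrgImg.Width; x++)
        {
            int si = y * src.Stride + x * 4;
            int di = y * dst.Stride + x * 4;
            byte alpha = srcBytes[si + 3];
            if (alpha != 255)
            {
                dstBytes[di + 2] = 255;                 //red (blue and green stay 0)
                dstBytes[di + 3] = (byte)(255 - alpha); //same alpha rule as the GetPixel version
            }
        }
    }
    Marshal.Copy(dstBytes, 0, dst.Scan0, dstBytes.Length);
    OrgImg.UnlockBits(src);
    NewImg.UnlockBits(dst);
    OrgImg.Dispose();
    PictureBox1.Image = NewImg;
This keeps the same variable names as the question; only the per-pixel access changes.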

Reducing color depth in an image is not reducing the file size?

I use this code to reduce the depth of an image:
public void ApplyDecreaseColourDepth(int offset)
{
int A, R, G, B;
Color pixelColor;
for (int y = 0; y < bitmapImage.Height; y++)
{
for (int x = 0; x < bitmapImage.Width; x++)
{
pixelColor = bitmapImage.GetPixel(x, y);
A = pixelColor.A;
R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2)) % offset) - 1);
if (R < 0)
{
R = 0;
}
G = ((pixelColor.G + (offset / 2)) - ((pixelColor.G + (offset / 2)) % offset) - 1);
if (G < 0)
{
G = 0;
}
B = ((pixelColor.B + (offset / 2)) - ((pixelColor.B + (offset / 2)) % offset) - 1);
if (B < 0)
{
B = 0;
}
bitmapImage.SetPixel(x, y, Color.FromArgb(A, R, G, B));
}
}
}
My first question is: the offset that I give the function is not the depth, is that right?
The second is that when I save the image after reducing the depth of its colors, I get the same size as the original image. Isn't it logical that I should get a smaller file, or am I wrong?
This is the code that I use to save the modified image:
private Bitmap bitmapImage;
public void SaveImage(string path)
{
bitmapImage.Save(path);
}
You are just setting the pixel values to a lower level.
For example, if a pixel is represented by 3 channels with 16 bits per channel, you are reducing each pixel colour value to the equivalent of 8 bits per channel. This will never reduce the image size, as the allocated pixels already have a fixed depth of 16 bits.
Try saving the new values to a new image with a maximum of 8-bit depth.
Then you will have an image that is smaller in bytes, but not in overall size; that is, the X,Y dimensions of the image will remain intact. What you are doing now only reduces image quality.
Let's start by cleaning up the code a bit. The following pattern:
R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2)) % offset) - 1);
if (R < 0)
{
R = 0;
}
Is equivalent to this:
R = Math.Max(0, (pixelColor.R + offset / 2) / offset * offset - 1);
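For a concrete sense of what this does, with offset = 16 a component value of 100 becomes (100 + 8) / 16 * 16 - 1 = 6 * 16 - 1 = 95 under integer division, i.e. every input value within a 16-wide band maps to the same output level.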
You can thus simplify your function to this:
public void ApplyDecreaseColourDepth(int offset)
{
for (int y = 0; y < bitmapImage.Height; y++)
{
for (int x = 0; x < bitmapImage.Width; x++)
{
Color pixelColor = bitmapImage.GetPixel(x, y);
int A = pixelColor.A;
int R = Math.Max(0, (pixelColor.R + offset / 2) / offset * offset - 1);
int G = Math.Max(0, (pixelColor.G + offset / 2) / offset * offset - 1);
int B = Math.Max(0, (pixelColor.B + offset / 2) / offset * offset - 1);
bitmapImage.SetPixel(x, y, Color.FromArgb(A, R, G, B));
}
}
}
To answer your questions:
Correct; the offset is the size of the steps in the step function. The depth per color component is the original depth minus log2(offset). For example, if the original image has a depth of eight bits per component (bpc) and the offset is 16, then the depth of each component is 8 - log2(16) = 8 - 4 = 4 bpc. Note, however, that this only indicates how much entropy each output component can hold, not how many bits per component will actually be used to store the result.
The size of the output file depends on the stored color depth and the compression used. Simply reducing the number of distinct values each component can have won't automatically result in fewer bits being used per component, so an uncompressed image won't shrink unless you explicitly choose an encoding that uses fewer bits per component. If you are saving a compressed format such as PNG, you might see an improvement with the transformed image, or you might not; it depends on the content of the image. Images with a lot of flat untextured areas, such as line art drawings, will see negligible improvement, whereas photos will probably benefit noticeably from the transform (albeit at the expense of perceptual quality).
First I would like to ask you one simple question :)
Take int i = 10; and now i = i--;
Does that affect the size of i?
The answer is no.
You are doing the same thing.
An indexed image is represented by two matrices:
1 for the color mapping and
2 for the image mapping.
You just change the values of the elements, you don't delete them,
so it will not affect the size of the image.
You can't decrease color depth with Get/SetPixel. Those methods only change the color.
It seems you can't easily save an image with a specific pixel format, but I did find some code to change the pixel format in memory. You can try saving it, and it might work, depending on what format you save to.
From this question: https://stackoverflow.com/a/2379838/785745
He gives this code to change color depth:
public static Bitmap ConvertTo16bpp(Image img) {
var bmp = new Bitmap(img.Width, img.Height, System.Drawing.Imaging.PixelFormat.Format16bppRgb555);
using (var gr = Graphics.FromImage(bmp))
{
gr.DrawImage(img, new Rectangle(0, 0, img.Width, img.Height));
}
return bmp;
}
You can change the PixelFormat in the code to whatever you need.
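Usage would be something along these lines (file names are placeholders); whether the reduced depth survives in the saved file still depends on the encoder, as noted above:
    Bitmap reduced = ConvertTo16bpp(Image.FromFile("input.png"));
    reduced.Save("output.png", System.Drawing.Imaging.ImageFormat.Png);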
A Bitmap image of a certain pixel count is always the same size, because the bitmap format does not apply compression.
If you compress the image with an algorithm (e.g. JPEG) then the 'reduced' image should be smaller.
R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2))
Doesn't this always return 0?
If you want to reduce the size of your image, you can specify a different compression format when calling Image.Save().
GIF file format is probably a good candidate, since it works best with contiguous pixels of identical color (which happens more often when your color depth is low).
JPEG works great with photos, but you won't see significant results if you convert a 24-bit image into a 16-bit one and then compress it using JPEG, because of the way the algorithm works (you're better off saving the 24-bit picture as JPEG directly).
And as others have explained, your code won't reduce the size used by the Image object unless you actually copy the resulting data into another Bitmap object with a different PixelFormat such as Format16bppRgb555.

How to convert a byte array of 19200 bytes in size where each byte represents 4 pixels (2 bits per pixel) to a bitmap arranged as 320x240 characters

I am communicating with an instrument (remote controlling it) and
one of the things I need to do is to draw the instrument screen.
In order to get the screen I issue a command and the instrument
replies with an array of bytes that represents the screen.
Below is what the instrument manual has to say about converting the response to the actual screen:
The command retrieves the framebuffer data used for the display.
It is 19200 bytes in size, 2-bits per pixel, 4 pixels per byte arranged as
320x240 characters.
The data is sent in RLE encoded form.
To convert this data into a BMP for use in Windows, it needs to be
turned into a 4BPP. Also note that BMP files are upside down relative
to this data, i.e. the top display line is the last line in the BMP.
I managed to unpack the data, but now I am stuck on how to actually
go from the unpacked byte array to a bitmap.
My background on this is pretty close to zero and my searches
have not revealed much either.
I am looking for directions and/or articles I could use to help me
understand how to get this done.
Any code or even pseudo code would also help. :-)
So, just to summarize it all:
How to convert a byte array of 19200 bytes in size, where
each byte represents 4 pixels (2 bits per pixel),
to a bitmap arranged as 320x240 characters.
Thanks in advance.
To do something like this, you'll want a routine like this:
Bitmap ConvertToBitmap(byte[] data, int width, int height)
{
Bitmap bm = new Bitmap(width, height, PixelFormat.Format24bppRgb);
for (int y=0; y < height; y++) {
for (int x=0; x < width; x++) {
int value = ReadPixelValue(data, x, y, width);
Color c = ConvertValToColor(value);
bm.SetPixel(x, y, c);
}
}
return bm;
}
from here, you need ReadPixelValue and ConvertValToColor.
static int ReadPixelValue(byte[] data, int x, int y, int width)
{
int pixelsPerByte = 4;
// added the % pixelsPerByte to deal with width not being a multiple of pixelsPerByte,
// which won't happen in your case, but will in the general case
int bytesPerLine = width / pixelsPerByte + (width % pixelsPerByte != 0 ? 1 : 0);
int index = y * bytesPerLine + (x / pixelsPerByte);
byte b = data[index];
int pixelIndex = (x % pixelsPerByte) * 2;
// if the pixel order within each byte is reversed, try this instead:
// int pixelIndex = 6 - (x % pixelsPerByte) * 2;
return (b >> pixelIndex) & 0x3;
}
Basically, I pull each set of two bits out of each byte and return it as an int.
As for converting to a color, it's up to you how to make heads or tails of the 4 values that come back.
Most likely you can do something like this:
static Color[] _colors = new Color[] { Color.Black, Color.Red, Color.Blue, Color.White };
static Color ConvertValToColor(int val)
{
if (val < 0 || val >= _colors.Length)
throw new ArgumentOutOfRangeException("val");
return _colors[val];
}
If you have two bits per pixel, for each pixel you have 4 different possible colors. Probably the colors are indexed or just hardcoded (i.e. 0 means black, 1 white, etc).
Don't know if this is of much help (I don't know what bitmap object you are using, but perhaps it has a regular RGB or ARGB scheme with 1 byte per channel), but in pseudo-ActionScript, I think you should do something like this.
// 80 -> 320 / 4
for(var x:int = 0; x < 80; x++) {
for(var y:int = 0; y < 240; y++) {
var byteVal:int = readByte();
var px_1:int = (byteVal >> 6) & 0x03;
var px_2:int = (byteVal >> 4) & 0x03;
var px_3:int = (byteVal >> 2) & 0x03;
var px_4:int = (byteVal) & 0x03;
// map your pixel value to ARGB
px_1 = getPixelValue(px_1);
px_2 = getPixelValue(px_2);
px_3 = getPixelValue(px_3);
px_4 = getPixelValue(px_4);
// assuming setPixel(x,y,pixelValue)
setPixel((x * 4), y, px_1);
setPixel((x * 4) + 1, y, px_2);
setPixel((x * 4) + 2, y, px_3);
setPixel((x * 4) + 3, y, px_4);
}
}
function getPixelValue(idx:int):uint {
// just an example...
switch(idx) {
case 0: return 0xff000000; // black
case 1: return 0xffffffff; // white
case 2: return 0xffff0000; // red
case 3: return 0xff0000ff; // blue
}
}
The above code, suffice it to say, is just to give you an idea (hopefully!) and is based on some assumptions like how these four pixels are stored in a byte.
Hope it makes sense.
I don't know if this helps; I use this for data I got from a rare piece of old hardware:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Drawing;
using System.IO;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
byte[] imageBytes = new byte[19201];
//(Fill it with the data from the unit before doing the rest).
Bitmap bmp_workarea = new Bitmap(320, 240, System.Drawing.Imaging.PixelFormat.Format4bppIndexed);
Image newImage = Image.FromStream(new MemoryStream(imageBytes)); //note: this assumes imageBytes holds an encoded image (e.g. a BMP), not the raw framebuffer bytes
using (Graphics gr = Graphics.FromImage(bmp_workarea))
{
gr.DrawImage(newImage, new Rectangle(0, 0, bmp_workarea.Width, bmp_workarea.Height));
}
//now you can use newImage, for example picturebox1.image=newimage
}
}
}
