I'm trying to get a bitmap created from raw data to show in WPF, using an Image and a BitmapSource:
Int32[] data = new Int32[RenderHeight * RenderWidth];
for (Int32 i = 0; i < RenderHeight; i++)
{
for (Int32 j = 0; j < RenderWidth; j++)
{
Int32 index = j + (i * RenderWidth);
if ((i + j) % 2 == 0)
data[index] = 0xFF0000;
else
data[index] = 0x00FF00;
}
}
BitmapSource source = BitmapSource.Create(RenderWidth, RenderHeight, 96.0, 96.0, PixelFormats.Bgr32, null, data, 0);
RenderImage.Source = source;
However, the call to BitmapSource.Create throws an ArgumentException saying "Value does not fall within the expected range". Is this not the way to do this? Am I not making the call correctly?
Your stride is incorrect. Stride is the number of bytes allocated for one scanline of the bitmap. Thus, use the following:
int stride = ((RenderWidth * 32 + 31) & ~31) / 8;
and replace the last parameter (currently 0) with stride as defined above.
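Putting that together with the snippet from the question, the corrected call would look roughly like this (a sketch reusing the variables above):

int stride = ((RenderWidth * 32 + 31) & ~31) / 8;   // = RenderWidth * 4 for a 32 bpp format
BitmapSource source = BitmapSource.Create(
    RenderWidth, RenderHeight,   // pixel dimensions
    96.0, 96.0,                  // DPI
    PixelFormats.Bgr32, null,    // format, no palette
    data,                        // the Int32[] pixel buffer
    stride);                     // bytes per scanline
RenderImage.Source = source;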
Here is an explanation for the mysterious stride formula:
Fact: Scanlines must be aligned on 32-bit boundaries (reference).
The naive formula for the number of bytes per scanline would be:
(width * bpp) / 8
But this might not give us a bitmap aligned on a 32-bit boundary, and (width * bpp) might not even be divisible by 8.
So, what we do is we force our bitmap to have at least 32 bits in a row (we assume that width > 0):
width * bpp + 31
and then we say that we don't care about the low-order bits (bits 0-4) because we are trying to align on 32-bit boundaries:
(width * bpp + 31) & ~31
and then divide by 8 to get back to bytes:
((width * bpp + 31) & ~31) / 8
The padding can be computed by
int padding = stride - (((width * bpp) + 7) / 8)
The naive formula would be
stride - ((width * bpp) / 8)
But width * bpp might not fall on a byte boundary, and when it doesn't, this formula over-counts the padding by a byte. (Think of a 1 pixel wide bitmap using 1 bpp: the stride is 4, and the naive formula would say the padding is 4, but in reality it is 3.) So we add a little to cover the case where width * bpp is not on a byte boundary, and then we get the correct formula given above.
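As a quick check in code, here is the same computation for the 1 pixel wide, 1 bpp example (a sketch):

int width = 1, bpp = 1;                          // the example from the paragraph above
int stride  = ((width * bpp + 31) & ~31) / 8;    // = 4 bytes (rows are 32-bit aligned)
int padding = stride - ((width * bpp + 7) / 8);  // = 4 - 1 = 3 bytes, not 4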
I'm using Evil-DICOM to construct a 2D image in Unity (i.e. a Texture2D). The output pixel values are wrong compared to what I get from other DICOM viewers. I'm new to DICOM development and can't figure out what I did wrong. I've been stuck on this for weeks. Any help is appreciated.
I'm using this formula from:
https://www.dabsoft.ch/dicom/3/C.11.2.1.2/
I also read this answer from:
How to Display DICOM images using EvilDICOM in c#?
Known information about the DICOM file I'm using:
Bits Allocated: 16
Bits Stored: 16
High Bit: 15
Rows, Columns: 512
Pixel Representation: 0 (i.e. unsigned)
Window Center: 40
Window Width: 350
Rescale Intercept: -1024
Rescale Slope: 1
//Convert pixel data to 8 bit grayscale
for (int i = 0; i < pixelData.Count; i += 2)
{
//original data - 16 bits unsigned
ushort pixel = (ushort)(pixelData[i] * 0xFF + pixelData[i + 1]);
double valgray = pixel;
valgray = slope * valgray + intercept; //modality lut
if (valgray <= level - 0.5 - (window - 1)/2)
{
valgray = 0;
}
else if (valgray > level - 0.5 + (window - 1)/2)
{
valgray = 255;
}
else
{
valgray = ((valgray - (level - 0.5)) / (window - 1) + 0.5);
}
//Assign valgray to RGBA
colors[i / 2].r = (byte)(valgray);
colors[i / 2].g = (byte)(valgray);
colors[i / 2].b = (byte)(valgray);
colors[i / 2].a = 0xFF; //Alpha = max
}
The left is my output; the right is the output from another DICOM viewer:
https://drive.google.com/file/d/1IjL48_iZDXAVi4_gzG6fLN3A2td2rwfS/view?usp=sharing
I had the order of bytes in pixelData inverted. The pixel value should be:
ushort pixel = (ushort)(pixelData[i + 1] * 256 + pixelData[i]);
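For reference, here is a sketch of the whole loop with the corrected byte order; the only other change from the snippet above is scaling the windowed value to the 0-255 output range, as in the linked DICOM formula (ymin = 0, ymax = 255):

//Convert pixel data to 8-bit grayscale
for (int i = 0; i < pixelData.Count; i += 2)
{
    // original data: 16 bits unsigned, little-endian (low byte first)
    ushort pixel = (ushort)(pixelData[i + 1] * 256 + pixelData[i]);
    double valgray = slope * pixel + intercept;               // modality LUT
    if (valgray <= level - 0.5 - (window - 1) / 2)
        valgray = 0;                                          // below the window: black
    else if (valgray > level - 0.5 + (window - 1) / 2)
        valgray = 255;                                        // above the window: white
    else
        // window/level (VOI LUT), scaled to 0-255
        valgray = ((valgray - (level - 0.5)) / (window - 1) + 0.5) * 255;
    colors[i / 2].r = (byte)valgray;
    colors[i / 2].g = (byte)valgray;
    colors[i / 2].b = (byte)valgray;
    colors[i / 2].a = 0xFF;                                   // alpha = max
}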
I would like to set a pixel at a specific point, but without using the slow SetPixel() method.
The regular way is simple, for example: bmp.SetPixel(120, 53, Color.Red);
But now that I'm using LockBits and pointers, it seems like I have to walk the whole image and can't set a pixel at a specific location.
This is my code:
private unsafe void ChangePX(Bitmap bmp)
{
    // ReadWrite is needed because the pixel data is modified
    bmData = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadWrite, bmp.PixelFormat);
    IntPtr scan0 = bmData.Scan0;
    int stride = bmData.Stride;
    int nWidth = bmp.Width;
    int nHeight = bmp.Height;
    for (int y = 0; y < nHeight; y++)
    {
        byte* p = (byte*)scan0.ToPointer();
        p += y * stride;
        for (int x = 0; x < nWidth; x++)
        {
            if (x == 120 && y == 53)
            { //found the position.
                p[0] = 255;
                p[1] = 255;
                p[2] = 255;
                p[3] = 1;
            }
            p += 4; // assumes 4 bytes per pixel
        }
    }
    bmp.UnlockBits(bmData);
}
As you can see, I'm looping through the image to find the specific position, but I would like to set it directly without looping.
I don't want to use the classic SetPixel method, because I have to set pixels at many points.
Any help would be appreciated.
Each row in a bitmap is padded at the end so the next row starts at a 4-byte boundary. bmData.Stride contains the size of such a padded row.
A bitmap also contains header information, such as the pixel format and the size of the image (unpadded). bmData.Scan0 contains the address of the first byte of pixel data.
So considering each row is bmData.Stride bytes wide, row y starts at:
bmData.Scan0 + (y * bmData.Stride)
Each pixel is assumed to be stored in 4 bytes, so to find the first byte of the pixel data for column x in row y, you add x * 4:
bmData.Scan0 + (y * bmData.Stride) + (x * 4)
Then you can directly point to that address if you want:
byte* p = (byte*)bmData.Scan0 + (y * bmData.Stride) + (x * 4);
Well, you can directly address a pixel. Given the dimensions and the stride (assuming you always have 4 bytes per pixel), the address of one pixel is:
byte* p = (byte*)scan0.ToPointer();
p += (y * stride) + (x * 4);
Why?
Every row contains stride bytes, so to get to row y you need to add y * stride bytes. Each pixel on x-direction has 4 bytes, so if you add 4 * x you are at pixel x in the current row.
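Putting the two answers together, a minimal sketch (assuming a 32 bpp pixel format, and locking for writing since we modify the data):

// using System.Drawing; using System.Drawing.Imaging;
private unsafe void SetPixelDirect(Bitmap bmp, int x, int y, byte b, byte g, byte r, byte a)
{
    BitmapData bmData = bmp.LockBits(
        new Rectangle(0, 0, bmp.Width, bmp.Height),
        ImageLockMode.ReadWrite,
        PixelFormat.Format32bppArgb);                  // 4 bytes per pixel, stored as B, G, R, A
    try
    {
        byte* p = (byte*)bmData.Scan0.ToPointer() + (y * bmData.Stride) + (x * 4);
        p[0] = b;
        p[1] = g;
        p[2] = r;
        p[3] = a;
    }
    finally
    {
        bmp.UnlockBits(bmData);                        // always unlock, even if something throws
    }
}

If you need to set many pixels, lock once, write all of them using this addressing, and unlock once at the end; locking and unlocking per pixel is what makes SetPixel slow.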
As the subject says, I have a .bmp image and I need to write code that can get the colour of any pixel of the image. It is a 1bpp (indexed) image, so the colour will be either black or white. Here is the code I currently have:
//This method locks the bits of line of pixels
private BitmapData LockLine(Bitmap bmp, int y)
{
Rectangle lineRect = new Rectangle(0, y, bmp.Width, 1);
BitmapData line = bmp.LockBits(lineRect,
ImageLockMode.ReadWrite,
bmp.PixelFormat);
return line;
}
//This method takes the BitmapData of a line of pixels
//and returns the color of one which has the needed x coordinate
private Color GetPixelColor(BitmapData data, int x)
{
//I am not sure if this line is correct
IntPtr pPixel = data.Scan0 + x;
//The following code works for the 24bpp image:
byte[] rgbValues = new byte[3];
System.Runtime.InteropServices.Marshal.Copy(pPixel, rgbValues, 0, 3);
return Color.FromArgb(rgbValues[2], rgbValues[1], rgbValues[0]);
}
But how can I make it work for a 1bpp image? If I read only one byte from the pointer, it always has the value 255, so I assume I am doing something wrong.
Please do not suggest using the System.Drawing.Bitmap.GetPixel method, because it is too slow and I want the code to run as fast as possible.
Thanks in advance.
EDIT:
Here is the code that works fine, just in case someone needs this:
private Color GetPixelColor(BitmapData data, int x)
{
    int byteIndex = x / 8;
    int bitIndex = x % 8;
    IntPtr pFirstPixel = data.Scan0 + byteIndex;
    byte[] color = new byte[1];
    System.Runtime.InteropServices.Marshal.Copy(pFirstPixel, color, 0, 1);
    // The leftmost pixel sits in the most significant bit, while BitArray indexes
    // from the least significant bit, hence 7 - bitIndex. Whether a set bit means
    // black or white depends on the image's palette.
    BitArray bits = new BitArray(color);
    return bits.Get(7 - bitIndex) ? Color.Black : Color.White;
}
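For completeness, a hypothetical usage of the two methods above; the lock must be released after reading:

BitmapData line = LockLine(bmp, y);
Color c = GetPixelColor(line, x);
bmp.UnlockBits(line);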
Ok, got it! You need to read the bytes from the BitmapData and apply a mask to extract the bit for the pixel you want:
var bm = new Bitmap...
//lock all image bits
var bitmapData = bm.LockBits(new Rectangle(0, 0, bm.Width, bm.Height), ImageLockMode.ReadWrite, PixelFormat.Format1bppIndexed);
// this will return the pixel's index in the color palette
// since it is 1bpp, it will return 0 or 1
int pixelColorIndex = GetIndexedPixel(50, 30, bitmapData);
// read the color from the palette
Color pixelColor = bm.Palette.Entries[pixelColorIndex];
And here is the method:
// x, y relative to the locked area
private int GetIndexedPixel(int x, int y, BitmapData bitmapData)
{
var index = y * bitmapData.Stride + (x >> 3);
var chunk = Marshal.ReadByte(bitmapData.Scan0, index);
var mask = (byte)(0x80 >> (x & 0x7));
return (chunk & mask) == mask ? 1 : 0;
}
The pixel position is calculated in two steps:
1) Find the byte where pixel 'x' lives (x / 8): each byte holds 8 pixels, so divide x by 8, rounding down: 58 >> 3 = 7, i.e. the pixel is in byte 7 of the current row (stride).
2) Find the bit within that byte (x % 8): x & 0x7 keeps only the 3 low-order bits, which is the same as x % 8.
Example:
x = 58
// x / 8 - the pixel is on byte 7
byte = 58 >> 3 = 58 / 8 = 7
// x % 8 - byte 7, bit 2
bitPosition = 58 & 0x7 = 2
// the pixels are read from left to right, so we start with 0x80 and then shift right.
mask = 0x80 >> bitPosition = 1000 0000b >> 2 = 0010 0000b
First of all, if you only need to read a single pixel in one operation, then GetPixel will be equivalent in performance. The expensive operation is locking the bits, i.e. you should hold on to the BitmapData for all the reading you need to do, and only close it at the end - but remember to close it!
There seems to be some confusion about your pixel format, but let's assume it really is 1bpp. Then each pixel occupies one bit, and one byte holds data for 8 pixels. Therefore your indexing calculation is incorrect: the byte is at x / 8, and within it you need bit x % 8.
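As a sketch of that advice (lock once, read everything, unlock at the end), assuming a 1 bpp bitmap and the MSB-first bit layout described above; the method name is made up:

// using System.Drawing; using System.Drawing.Imaging; using System.Runtime.InteropServices;
private bool[,] ReadAllBits(Bitmap bmp)
{
    BitmapData data = bmp.LockBits(
        new Rectangle(0, 0, bmp.Width, bmp.Height),
        ImageLockMode.ReadOnly,
        PixelFormat.Format1bppIndexed);
    try
    {
        bool[,] bits = new bool[bmp.Height, bmp.Width];
        for (int y = 0; y < bmp.Height; y++)
        {
            for (int x = 0; x < bmp.Width; x++)
            {
                // byte x / 8 of row y; the leftmost pixel is the most significant bit
                byte chunk = Marshal.ReadByte(data.Scan0, y * data.Stride + (x >> 3));
                bits[y, x] = (chunk & (0x80 >> (x & 7))) != 0;
            }
        }
        return bits;
    }
    finally
    {
        bmp.UnlockBits(data);   // remember to close it!
    }
}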
I use this code to reduce the depth of an image:
public void ApplyDecreaseColourDepth(int offset)
{
int A, R, G, B;
Color pixelColor;
for (int y = 0; y < bitmapImage.Height; y++)
{
for (int x = 0; x < bitmapImage.Width; x++)
{
pixelColor = bitmapImage.GetPixel(x, y);
A = pixelColor.A;
R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2)) % offset) - 1);
if (R < 0)
{
R = 0;
}
G = ((pixelColor.G + (offset / 2)) - ((pixelColor.G + (offset / 2)) % offset) - 1);
if (G < 0)
{
G = 0;
}
B = ((pixelColor.B + (offset / 2)) - ((pixelColor.B + (offset / 2)) % offset) - 1);
if (B < 0)
{
B = 0;
}
bitmapImage.SetPixel(x, y, Color.FromArgb(A, R, G, B));
}
}
}
My first question is: the offset that I pass to the function is not the depth, is that right?
The second is that when I save the image after reducing the depth of its colors, I get the same file size as the original image. Isn't it logical that I should get a smaller file, or am I wrong?
This is the code that I use to save the modified image:
private Bitmap bitmapImage;
public void SaveImage(string path)
{
bitmapImage.Save(path);
}
You are just setting the pixel values to a lower level.
For example, if a pixel is represented by 3 channels with 16 bits per channel, you are reducing each pixel's colour value to the equivalent of 8 bits per channel. This will never reduce the image size, because the allocated pixels still have a fixed depth of 16 bits.
Try saving the new values to a new image with a maximum depth of 8 bits.
Then you will have an image that is smaller in bytes, but the overall size, that is, the X and Y dimensions of the image, will remain intact. What you are doing only reduces image quality.
Let's start by cleaning up the code a bit. The following pattern:
R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2)) % offset) - 1);
if (R < 0)
{
R = 0;
}
Is equivalent to this:
R = Math.Max(0, (pixelColor.R + offset / 2) / offset * offset - 1);
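A quick sanity check with hypothetical values pixelColor.R = 100 and offset = 32 shows the two forms agree:

// original:   (100 + 16) - ((100 + 16) % 32) - 1    = 116 - 20 - 1         = 95
// simplified: Math.Max(0, (100 + 16) / 32 * 32 - 1) = Math.Max(0, 96 - 1)  = 95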
You can thus simplify your function to this:
public void ApplyDecreaseColourDepth(int offset)
{
for (int y = 0; y < bitmapImage.Height; y++)
{
for (int x = 0; x < bitmapImage.Width; x++)
{
Color pixelColor = bitmapImage.GetPixel(x, y);
int A = pixelColor.A;
int R = Math.Max(0, (pixelColor.R + offset / 2) / offset * offset - 1);
int G = Math.Max(0, (pixelColor.G + offset / 2) / offset * offset - 1);
int B = Math.Max(0, (pixelColor.B + offset / 2) / offset * offset - 1);
bitmapImage.SetPixel(x, y, Color.FromArgb(A, R, G, B));
}
}
}
To answer your questions:
Correct; the offset is the size of the steps in the step function. The depth per color component is the original depth minus log2(offset). For example, if the original image has a depth of eight bits per component (bpc) and the offset is 16, then the depth of each component is 8 - log2(16) = 8 - 4 = 4 bpc. Note, however, that this only indicates how much entropy each output component can hold, not how many bits per component will actually be used to store the result.
The size of the output file depends on the stored color depth and the compression used. Simply reducing the number of distinct values each component can have won't automatically result in fewer bits being used per component, so an uncompressed image won't shrink unless you explicitly choose an encoding that uses fewer bits per component. If you are saving a compressed format such as PNG, you might see an improvement with the transformed image, or you might not; it depends on the content of the image. Images with a lot of flat untextured areas, such as line art drawings, will see negligible improvement, whereas photos will probably benefit noticeably from the transform (albeit at the expense of perceptual quality).
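A quick way to see the difference is to save the same bitmap both uncompressed and compressed (the file names here are just placeholders):

// using System.Drawing.Imaging;
bitmapImage.Save("reduced.bmp", ImageFormat.Bmp);  // uncompressed: size depends only on dimensions and pixel format
bitmapImage.Save("reduced.png", ImageFormat.Png);  // lossless compression: a quantized image often compresses better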
First, I would like to ask you one simple question :)
int i = 10;
and now i--;
Does that affect the size of i?
The answer is no.
You are doing the same thing.
Indexed images are represented by two matrices:
1 for the color mapping and
2 for the image mapping.
You are only changing the values of the elements, not deleting them, so it will not affect the size of the image.
You can't decrease color depth with Get/SetPixel. Those methods only change the color.
It seems you can't easily save an image to a certain pixel format, but I did find some code to change the pixel format in memory. You can try saving it, and it might work, depending on what format you save to.
From this question: https://stackoverflow.com/a/2379838/785745
He gives this code to change color depth:
public static Bitmap ConvertTo16bpp(Image img) {
var bmp = new Bitmap(img.Width, img.Height, System.Drawing.Imaging.PixelFormat.Format16bppRgb555);
using (var gr = Graphics.FromImage(bmp))
{
gr.DrawImage(img, new Rectangle(0, 0, img.Width, img.Height));
}
return bmp;
}
You can change the PixelFormat in the code to whatever you need.
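A hypothetical usage, converting and then saving the result:

Bitmap reduced = ConvertTo16bpp(bitmapImage);
reduced.Save("reduced16.png", System.Drawing.Imaging.ImageFormat.Png);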
A Bitmap image of a certain pixel count is always the same size, because the bitmap format does not apply compression.
If you compress the image with an algorithm (e.g. JPEG) then the 'reduced' image should be smaller.
R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2))
Doesn't this always return 0?
If you want to reduce the size of your image, you can specify a different compression format when calling Image.Save().
GIF file format is probably a good candidate, since it works best with contiguous pixels of identical color (which happens more often when your color depth is low).
JPEG works great with photos, but you won't see significant results if you convert a 24-bit image into a 16-bit one and then compress it using JPEG, because of the way the algorithm works (you're better off saving the 24-bit picture as JPEG directly).
And as others have explained, your code won't reduce the size used by the Image object unless you actually copy the resulting data into another Bitmap object with a different PixelFormat such as Format16bppRgb555.
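For example, a sketch of saving as JPEG with an explicit quality setting (the quality value 75 is just an illustration):

// using System.Linq; using System.Drawing.Imaging;
ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
    .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
EncoderParameters encoderParams = new EncoderParameters(1);
encoderParams.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, 75L);
bitmapImage.Save("photo.jpg", jpegCodec, encoderParams);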
I've been working on a bitmap decoder, but my algorithm for processing the pixel data doesn't seem to be quite right:
public IntPtr ReadPixels(Stream fs, int offset, int width, int height, int bpp)
{
IntPtr bBits;
int pixelCount = bpp * width * height;
int Row = 0;
decimal value = ((bpp*width)/32)/4;
int RowSize = (int)Math.Ceiling(value);
int ArraySize = RowSize * Math.Abs(height);
int Col = 0;
Byte[] BMPData = new Byte[ArraySize];
BinaryReader r = new BinaryReader(fs);
r.BaseStream.Seek(offset, SeekOrigin.Begin);
while (Row < height)
{
Byte ReadByte;
if (!(Col >= RowSize))
{
ReadByte = r.ReadByte();
BMPData[(Row * RowSize) + Col] = ReadByte;
Col += 1;
}
if (Col >= RowSize)
{
Col = 0;
Row += 1;
}
}
bBits = System.Runtime.InteropServices.Marshal.AllocHGlobal(BMPData.Length);
System.Runtime.InteropServices.Marshal.Copy(BMPData, 0, bBits, BMPData.Length);
return bBits;
}
I can process only monochrome bitmaps, and on some of them only parts of the bitmap are processed correctly. None are compressed, and they are rendered upside down and flipped around. I could really use some help with this one.
decimal value = ((bpp*width)/32)/4;
int RowSize = (int)Math.Ceiling(value);
That isn't correct. Your RowSize variable is actually called "stride". You compute it like this:
int bytes = (width * bitsPerPixel + 7) / 8;
int stride = 4 * ((bytes + 3) / 4);
You are ignoring the stride.
Image rows can be padded at the end with additional bytes so that their size is a multiple of some number (1 = no padding, 2, 4, 8 = the default for many images, 16, ...).
Also, an image can be a rectangular region within a larger image, which makes the "padding" between lines of the smaller image even larger (since the stride is the larger image's stride). In this case the image can also have an offset for its start point within the buffer.
Better practice is:
// Overload this method 3 times, for the different bits-per-SUB-pixel values (8, 16, or 32)
// = (byte, int, float)
// SUB-pixel != pixel (a pixel is 1, 3 or 4 sub-pixels: grey, RGB, BGR, BGRA, RGBA, ARGB or ABGR)
unsafe
{
    byte[] buffer = image.Buffer;                 // raw pixel data (the property name depends on your image class)
    int stride = buffer.Length / image.PixelHeight;
    // or: int stride = image.LineSize; (or whatever your image class exposes)
    fixed (byte* bufferStart = buffer)
    {
        for (int y = 0; y < height; y++)          // height in pixels
        {
            // float* and float, byte* and byte, or int* and int, depending on the sub-pixel type
            float* currentPos = (float*)(bufferStart + offset + stride * y);
            for (int x = 0; x < width; x++)       // width in pixels
            {
                for (int chan = 0; chan < channel; chan++) // 1, 3 or 4 channels
                {
                    // DO NOT USE decimal - you want accurate image values with the best
                    // performance, i.e. primitive types, not a .NET type meant for
                    // nice-looking values for users, e.g. 12.34.
                    // Use the actual sub-pixel type (float/int/byte) or double instead!
                    float currentValue = *currentPos;
                    currentPos++;
                }
            }
        }
    }
}
There is something I don't understand:
decimal value = ((bpp*width)/32)/4;
int RowSize = (int)Math.Ceiling(value);
RowSize, in my opinion, should be (bpp * width) / 8 + ((bpp * width) % 8 == 0 ? 0 : 1)
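Note that for a BMP the row size also has to be rounded up to a multiple of 4 bytes, as in the other answer; a sketch:

int rowBytes = (width * bpp + 7) / 8;  // bytes of actual pixel data per row, rounded up to whole bytes
int rowSize  = (rowBytes + 3) & ~3;    // BMP rows are padded to a multiple of 4 bytes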