I'm using Evil-DICOM to construct a 2D image (i.e. a Texture2D) in Unity. The output pixel values are wrong compared to what I get from other DICOM viewers. I'm new to DICOM development and couldn't figure out what I did wrong; I've been stuck on this for weeks. Any help is appreciated.
I'm using this formula from:
https://www.dabsoft.ch/dicom/3/C.11.2.1.2/
I also read this answer from:
How to Display DICOM images using EvilDICOM in c#?
Known information about the DICOM file I'm using:
Bits Allocated: 16
Bits Stored: 16
High Bit: 15
Rows, Columns: 512
Pixel Representation: 0 (i.e. unsigned)
Window Center: 40
Window Width: 350
Rescale Intercept: -1024
Rescale Slope: 1
//Convert pixel data to 8-bit grayscale
for (int i = 0; i < pixelData.Count; i += 2)
{
    //original data - 16 bits unsigned
    ushort pixel = (ushort)(pixelData[i] * 0xFF + pixelData[i + 1]);
    double valgray = pixel;
    valgray = slope * valgray + intercept; //modality LUT
    if (valgray <= level - 0.5 - (window - 1) / 2)
    {
        valgray = 0;
    }
    else if (valgray > level - 0.5 + (window - 1) / 2)
    {
        valgray = 255;
    }
    else
    {
        //VOI LUT, scaled to the 8-bit output range
        valgray = ((valgray - (level - 0.5)) / (window - 1) + 0.5) * 255;
    }
    //Assign valgray to RGBA
    colors[i / 2].r = (byte)valgray;
    colors[i / 2].g = (byte)valgray;
    colors[i / 2].b = (byte)valgray;
    colors[i / 2].a = 0xFF; //Alpha = max
}
The left is my output; the right is the output from another DICOM viewer:
https://drive.google.com/file/d/1IjL48_iZDXAVi4_gzG6fLN3A2td2rwfS/view?usp=sharing
It turns out I had the order of bytes in pixelData inverted: the pixel data is little-endian, so the low byte comes first (note the high byte must also be multiplied by 256, not 0xFF). The pixel value should be:
ushort pixel = (ushort)(pixelData[i + 1] * 256 + pixelData[i]);
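For reference, here is the full corrected loop as a sketch, using the question's variable names (level, window, slope, and intercept hold the Window Center/Width and Rescale values listed above):

// Corrected conversion: pixel data is little-endian (low byte first).
for (int i = 0; i < pixelData.Count; i += 2)
{
    ushort pixel = (ushort)(pixelData[i + 1] * 256 + pixelData[i]);
    double valgray = slope * pixel + intercept; // modality LUT
    double lower = level - 0.5 - (window - 1) / 2.0;
    double upper = level - 0.5 + (window - 1) / 2.0;
    if (valgray <= lower)
        valgray = 0;
    else if (valgray > upper)
        valgray = 255;
    else
        valgray = ((valgray - (level - 0.5)) / (window - 1) + 0.5) * 255.0; // VOI LUT
    colors[i / 2] = new Color32((byte)valgray, (byte)valgray, (byte)valgray, 0xFF);
}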
Related
I need to extract the R, G, B values from an image and store them in three different arrays along with their equivalent coordinate values. For example: in a 500*500 image, R must store only the red color information of each pixel together with its coordinates, i.e., from [0,0] up to the maximum coordinate [499,499]. After extracting all the R values into an array, I need to retrieve the R value of a particular pixel using its coordinates, obtained from a mouse click on the image, and then do the same for the G and B values. Can anyone help me do this in C#, in a WPF application?
if your image is a System.Drawing.Bitmap:
// Requires: using System.Drawing; using System.Drawing.Imaging;
//           using System.Runtime.InteropServices;
// Assumes a 32bpp bitmap (PixelFormat.Format32bppRgb), i.e. 4 bytes per pixel.
const int PixelWidth = 4;
int[] rgbArray = new int[image.Width * image.Height];

BitmapData data = image.LockBits(
    new Rectangle(0, 0, image.Width, image.Height),
    ImageLockMode.ReadOnly,
    PixelFormat.Format32bppRgb);
try
{
    byte[] pixelData = new byte[data.Stride];
    for (int scanline = 0; scanline < data.Height; scanline++)
    {
        // Copy one scanline at a time out of the locked bitmap memory.
        Marshal.Copy(data.Scan0 + (scanline * data.Stride), pixelData, 0, data.Stride);
        for (int pixeloffset = 0; pixeloffset < data.Width; pixeloffset++)
        {
            // PixelFormat.Format32bppRgb means the data is stored
            // in memory as BGR. We want RGB, so we must do some
            // bit-shuffling.
            rgbArray[(scanline * data.Width) + pixeloffset] =
                (pixelData[pixeloffset * PixelWidth + 2] << 16) | // R
                (pixelData[pixeloffset * PixelWidth + 1] << 8) |  // G
                 pixelData[pixeloffset * PixelWidth];             // B
        }
    }
}
finally
{
    image.UnlockBits(data);
}
I'm currently undertaking a university project that involves object detection and recognition with a Kinect. I'm using the MapDepthFrameToColorSpace method to map the depth frame to color space. I believe the issue is with this loop here:
for (int i = 0; i < _depthData.Length; ++i)
{
    ColorSpacePoint newpoint = cPoints[i];
    if (!float.IsNegativeInfinity(newpoint.X) && !float.IsNegativeInfinity(newpoint.Y))
    {
        int colorX = (int)Math.Floor(newpoint.X + 0.5);
        int colorY = (int)Math.Floor(newpoint.Y + 0.5);
        if ((colorX >= 0) && (colorX < colorFrameDescription.Width) && (colorY >= 0) && (colorY < colorFrameDescription.Height))
        {
            int j = ((colorFrameDescription.Width * colorY) + colorX) * bytesPerPixel;
            int newdepthpixel = i * 4;
            displaypixel[newdepthpixel] = colorpixels[j];         // B
            displaypixel[newdepthpixel + 1] = colorpixels[j + 1]; // G
            displaypixel[newdepthpixel + 2] = colorpixels[j + 2]; // R
            displaypixel[newdepthpixel + 3] = 255;                // A
        }
    }
}
It appears that the indexing is not correct, or there are pixels/depth values missing, because the output appears to be multiples of the same image, but small and with a limited x range.
http://postimg.org/image/tecnvp1nx/
Let me guess: your output image (displaypixel) is 1920x1080 pixels big? (Though from the link you posted, it seems to be 1829×948?)
That's your problem. MapDepthFrameToColorSpace returns the corresponding position in the color image for each depth pixel. That means you get 512x424 values. Putting those into a 1920x1080 image means only about 10% of the image is filled, and the part that's filled will be jumbled.
If you make your output image 512x424 pixels big instead, it should give you an image like the second one in this article.
Or you could keep your output image at 1920x1080, but instead of putting one pixel after the other, you'd also calculate the position where to put each pixel. So instead of doing
int newdepthpixel = i * 4;
you'd need to do
int newdepthpixel = ((colorFrameDescription.Width * colorY) + colorX) * 4;
That would give you a 1920x1080 image, but with only 512x424 pixels filled, with lots of space in between.
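Put together, the second variant would look something like this (a sketch based on the question's variable names):

// Write each depth pixel to its mapped position in the color-sized buffer.
for (int i = 0; i < _depthData.Length; ++i)
{
    ColorSpacePoint newpoint = cPoints[i];
    if (float.IsNegativeInfinity(newpoint.X) || float.IsNegativeInfinity(newpoint.Y))
        continue;

    int colorX = (int)Math.Floor(newpoint.X + 0.5);
    int colorY = (int)Math.Floor(newpoint.Y + 0.5);
    if (colorX < 0 || colorX >= colorFrameDescription.Width ||
        colorY < 0 || colorY >= colorFrameDescription.Height)
        continue;

    int j = ((colorFrameDescription.Width * colorY) + colorX) * bytesPerPixel;
    // The output position is derived from the mapped coordinates,
    // not from the depth index i.
    int newdepthpixel = ((colorFrameDescription.Width * colorY) + colorX) * 4;
    displaypixel[newdepthpixel] = colorpixels[j];         // B
    displaypixel[newdepthpixel + 1] = colorpixels[j + 1]; // G
    displaypixel[newdepthpixel + 2] = colorpixels[j + 2]; // R
    displaypixel[newdepthpixel + 3] = 255;                // A
}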
I use this code to reduce the colour depth of an image:
public void ApplyDecreaseColourDepth(int offset)
{
    int A, R, G, B;
    Color pixelColor;
    for (int y = 0; y < bitmapImage.Height; y++)
    {
        for (int x = 0; x < bitmapImage.Width; x++)
        {
            pixelColor = bitmapImage.GetPixel(x, y);
            A = pixelColor.A;
            R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2)) % offset) - 1);
            if (R < 0)
            {
                R = 0;
            }
            G = ((pixelColor.G + (offset / 2)) - ((pixelColor.G + (offset / 2)) % offset) - 1);
            if (G < 0)
            {
                G = 0;
            }
            B = ((pixelColor.B + (offset / 2)) - ((pixelColor.B + (offset / 2)) % offset) - 1);
            if (B < 0)
            {
                B = 0;
            }
            bitmapImage.SetPixel(x, y, Color.FromArgb(A, R, G, B));
        }
    }
}
My first question is: the offset that I give the function is not the depth, is that right?
The second is that when I save the image after reducing its colour depth, I get the same file size as the original image. Isn't it logical that I should get a smaller file, or am I wrong?
This is the code that I use to save the modified image:
private Bitmap bitmapImage;
public void SaveImage(string path)
{
bitmapImage.Save(path);
}
You are just setting the pixel values to a lower level.
For example, if a pixel is represented by 3 channels with 16 bits per channel, and you reduce each channel's value to fit in 8 bits, the image size will not change, because each channel is still allocated a fixed 16 bits.
Try saving the new values to a new image with a maximum depth of 8 bits per channel.
You will then get an image with fewer bytes, but the overall size, that is, the X, Y dimensions of the image, will remain intact. What you are doing now only reduces image quality.
Let's start by cleaning up the code a bit. The following pattern:
R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2)) % offset) - 1);
if (R < 0)
{
R = 0;
}
Is equivalent to this:
R = Math.Max(0, (pixelColor.R + offset / 2) / offset * offset - 1);
You can thus simplify your function to this:
public void ApplyDecreaseColourDepth(int offset)
{
    for (int y = 0; y < bitmapImage.Height; y++)
    {
        for (int x = 0; x < bitmapImage.Width; x++)
        {
            Color pixelColor = bitmapImage.GetPixel(x, y);
            int A = pixelColor.A;
            int R = Math.Max(0, (pixelColor.R + offset / 2) / offset * offset - 1);
            int G = Math.Max(0, (pixelColor.G + offset / 2) / offset * offset - 1);
            int B = Math.Max(0, (pixelColor.B + offset / 2) / offset * offset - 1);
            bitmapImage.SetPixel(x, y, Color.FromArgb(A, R, G, B));
        }
    }
}
To answer your questions:
Correct; the offset is the size of the steps in the step function. The depth per color component is the original depth minus log2(offset). For example, if the original image has a depth of eight bits per component (bpc) and the offset is 16, then the depth of each component is 8 - log2(16) = 8 - 4 = 4 bpc. Note, however, that this only indicates how much entropy each output component can hold, not how many bits per component will actually be used to store the result.
The size of the output file depends on the stored color depth and the compression used. Simply reducing the number of distinct values each component can have won't automatically result in fewer bits being used per component, so an uncompressed image won't shrink unless you explicitly choose an encoding that uses fewer bits per component. If you are saving a compressed format such as PNG, you might see an improvement with the transformed image, or you might not; it depends on the content of the image. Images with a lot of flat untextured areas, such as line art drawings, will see negligible improvement, whereas photos will probably benefit noticeably from the transform (albeit at the expense of perceptual quality).
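As a quick illustration of the step function (my own example, not from the question): with offset = 16, the 256 possible inputs collapse onto 17 distinct outputs (the clamped 0 plus 16 evenly spaced levels), i.e. roughly 4 bits of entropy per component:

// Quantize one component the same way ApplyDecreaseColourDepth does.
static int Quantize(int v, int offset) =>
    Math.Max(0, (v + offset / 2) / offset * offset - 1);

// offset = 16:
//   v =   0..7   -> 0    (clamped from -1)
//   v =   8..23  -> 15
//   v =  24..39  -> 31
//   ...
//   v = 248..255 -> 255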
First I would like to ask you one simple question:
int i = 10;
and now i = i--;
Does it affect the size of i?
The answer is no.
You are doing the same thing.
An indexed image is represented by two matrices:
1 for the colour mapping and
2 for the image mapping.
You are just changing the values of elements, not deleting them,
so it will not affect the size of the image.
You can't decrease color depth with Get/SetPixel. Those methods only change the color.
It seems you can't easily save an image to a certain pixel format, but I did find some code to change the pixel format in memory. You can try saving it, and it might work, depending what format you save to.
From this question: https://stackoverflow.com/a/2379838/785745
He gives this code to change color depth:
public static Bitmap ConvertTo16bpp(Image img)
{
    var bmp = new Bitmap(img.Width, img.Height, System.Drawing.Imaging.PixelFormat.Format16bppRgb555);
    using (var gr = Graphics.FromImage(bmp))
    {
        gr.DrawImage(img, new Rectangle(0, 0, img.Width, img.Height));
    }
    return bmp;
}
You can change the PixelFormat in the code to whatever you need.
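For example, a hypothetical round trip (the file names are placeholders):

// Convert to 16bpp and save; the on-disk size still depends on the
// format chosen in Save(), not just on the in-memory pixel format.
using (var original = Image.FromFile("input.png"))
using (Bitmap bmp16 = ConvertTo16bpp(original))
{
    bmp16.Save("output.png", System.Drawing.Imaging.ImageFormat.Png);
}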
A Bitmap image of a certain pixel count is always the same size, because the bitmap format does not apply compression.
If you compress the image with an algorithm (e.g. JPEG) then the 'reduced' image should be smaller.
R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2))
Doesn't this always return 0?
If you want to reduce the size of your image, you can specify a different compression format when calling Image.Save().
GIF file format is probably a good candidate, since it works best with contiguous pixels of identical color (which happens more often when your color depth is low).
JPEG works great with photos, but you won't see significant results if you convert a 24-bit image into a 16-bit one and then compress it using JPEG, because of the way the algorithm works (you're better off saving the 24-bit picture as JPEG directly).
And as others have explained, your code won't reduce the size used by the Image object unless you actually copy the resulting data into another Bitmap object with a different PixelFormat such as Format16bppRgb555.
I'm updating a plugin for Paint.NET which I made some months ago. It's called Simulate Color Depth, and it reduces the number of colors in an image to the chosen BPP. For a long time it has included dithering, but never ordered dithering, and I thought that would be a nice addition. So I started to search the internet, ended up on this wiki page: http://en.wikipedia.org/wiki/Ordered_dithering, and tried to do as written in the pseudocode:
for (int y = 0; y < image.Height; y++)
{
    for (int x = 0; x < image.Width; x++)
    {
        Color color = image.GetPixel(x, y);
        int r = Math.Min(255, color.R + bayer8x8[x % 8, y % 8]);
        int g = Math.Min(255, color.G + bayer8x8[x % 8, y % 8]);
        int b = Math.Min(255, color.B + bayer8x8[x % 8, y % 8]);
        image.SetPixel(x, y, GetClosestColor(Color.FromArgb(r, g, b), bitdepth));
    }
}
But the result is way too bright, so I checked the wiki page again and saw the "1/65" to the right of the threshold map. That got me thinking of error diffusion (yes, I know, weird huh?) and of dividing the value from bayer8x8[x % 8, y % 8] by 65 and then multiplying the color channels by it, but the results were either messy or still too bright (as I remember it). Nothing looked like what I have seen elsewhere: too bright, too high contrast, or too messy. I haven't found anything really useful searching the internet, so does anyone know how I can get this Bayer dithering working properly?
Thanks in advance, Cookies
I don't think there's anything wrong with your original algorithm (from Wikipedia). The brightness disparity is probably an artifact of monitor gamma. See the appendix about gamma correction in this article about a dithering algorithm invented by Joel Yliluoma (http://bisqwit.iki.fi/story/howto/dither/jy/#Appendix%201GammaCorrection) for an explanation of the effect (NB: the page is quite graphics-heavy).
Incidentally, perhaps the (apparently public-domain) algorithm detailed in that article may be the solution to your problem...
Try this:
color.R = color.R + bayer8x8[x % 8, y % 8] * GAP / 65;
Here GAP should be the distance between the two nearest color thresholds. This depends on the bits per pixel.
For example, if you are converting the image to use 4 bits for the red component of each pixel, there are 16 levels of red total. They are: R=0, R=17, R=34, ... R=255. So GAP would be 17.
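In code, GAP could be derived from the target bit depth like this (a sketch; the names are mine):

int bits = 4;                 // target bits per component
int levels = 1 << bits;       // 2^4 = 16 levels
int GAP = 255 / (levels - 1); // 255 / 15 = 17, matching the example above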
Found a solution. levels is the number of colors the destination image should have, and d is the divisor (this is normalized from my code, which uses Paint.NET classes, to simple bitmap editing with GetPixel and SetPixel):
private void ProcessDither(int levels, int d, Bitmap image)
{
    levels -= 1;
    double scale = 1.0 / 255d;
    int t, l;
    for (int y = rect.Top; y < rect.Bottom; y++)
    {
        for (int x = rect.Left; x < rect.Right; x++)
        {
            Color cp = image.GetPixel(x, y);
            int threshold = matrix[y % rows][x % cols];

            t = (int)(scale * cp.R * (levels * d + 1));
            l = t / d;
            t = t - l * d;
            byte r = Clamp((l + (t >= threshold ? 1 : 0)) * 255 / levels);

            t = (int)(scale * cp.G * (levels * d + 1));
            l = t / d;
            t = t - l * d;
            byte g = Clamp((l + (t >= threshold ? 1 : 0)) * 255 / levels);

            t = (int)(scale * cp.B * (levels * d + 1));
            l = t / d;
            t = t - l * d;
            byte b = Clamp((l + (t >= threshold ? 1 : 0)) * 255 / levels);

            // System.Drawing.Color is immutable, so build a new color.
            image.SetPixel(x, y, Color.FromArgb(cp.A, r, g, b));
        }
    }
}
private byte Clamp(int val)
{
    return (byte)(val < 0 ? 0 : val > 255 ? 255 : val);
}
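The method assumes a few class members that the post doesn't show; a hypothetical setup might look like this (the 4x4 Bayer matrix and the field names are my assumptions):

// Hypothetical fields assumed by ProcessDither above.
private readonly int[][] matrix =
{
    new[] {  0,  8,  2, 10 },
    new[] { 12,  4, 14,  6 },
    new[] {  3, 11,  1,  9 },
    new[] { 15,  7, 13,  5 },
};
private readonly int rows = 4, cols = 4;
private Rectangle rect; // e.g. new Rectangle(0, 0, image.Width, image.Height)

// Usage sketch: d should match the number of matrix cells (16 here),
// and levels is the number of output levels per channel:
// ProcessDither(levels: 2, d: 16, image: bmp);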
I'm trying to get a bitmap created from raw data to show in WPF, by using an Image and a BitmapSource:
Int32[] data = new Int32[RenderHeight * RenderWidth];
for (Int32 i = 0; i < RenderHeight; i++)
{
    for (Int32 j = 0; j < RenderWidth; j++)
    {
        Int32 index = j + (i * RenderWidth);
        if ((i + j) % 2 == 0)
            data[index] = 0xFF0000;
        else
            data[index] = 0x00FF00;
    }
}
BitmapSource source = BitmapSource.Create(RenderWidth, RenderHeight, 96.0, 96.0, PixelFormats.Bgr32, null, data, 0);
RenderImage.Source = source;
However the call to BitmapSource.Create throws an ArgumentException, saying "Value does not fall within the expected range". Is this not the way to do this? Am I not making that call properly?
Your stride is incorrect. Stride is the number of bytes allocated for one scanline of the bitmap. Thus, use the following:
int stride = ((RenderWidth * 32 + 31) & ~31) / 8;
and replace the last parameter (currently 0) with stride as defined above.
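That is (a sketch of the corrected call):

int stride = ((RenderWidth * 32 + 31) & ~31) / 8; // bytes per scanline (32 bpp for Bgr32)
BitmapSource source = BitmapSource.Create(
    RenderWidth, RenderHeight, 96.0, 96.0,
    PixelFormats.Bgr32, null, data, stride);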
Here is an explanation for the mysterious stride formula:
Fact: Scanlines must be aligned on 32-bit boundaries (reference).
The naive formula for the number of bytes per scanline would be:
(width * bpp) / 8
But this might not give us a bitmap aligned on a 32-bit boundary, and (width * bpp) might not even be divisible by 8.
So, what we do is we force our bitmap to have at least 32 bits in a row (we assume that width > 0):
width * bpp + 31
and then we say that we don't care about the low-order bits (bits 0 through 4), because we are trying to align on 32-bit boundaries:
(width * bpp + 31) & ~31
and then divide by 8 to get back to bytes:
((width * bpp + 31) & ~31) / 8
The padding can be computed by
int padding = stride - (((width * bpp) + 7) / 8)
The naive formula would be
stride - ((width * bpp) / 8)
But width * bpp might not align on a byte boundary, and when it doesn't, this formula overcounts the padding by a byte. (Think of a 1-pixel-wide bitmap using 1 bpp: the stride is 4, and the naive formula would say that the padding is 4, but in reality it is 3.) So we add a little to cover the case that width * bpp is not on a byte boundary, and then we get the correct formula given above.
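A small sketch that checks both formulas against the 1-pixel-wide, 1 bpp example above:

// Stride and padding helpers from the formulas in this answer.
static int Stride(int width, int bpp) => ((width * bpp + 31) & ~31) / 8;
static int Padding(int width, int bpp) => Stride(width, bpp) - ((width * bpp + 7) / 8);

// 1 pixel wide at 1 bpp: stride = 4, padding = 3 (matches the text).
Console.WriteLine($"stride={Stride(1, 1)}, padding={Padding(1, 1)}");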