C# how to show an image in a PictureBox

I'm trying to display a DICOM image using openDicom.net. What should I correct here?
openDicom.Image.PixelData obraz = new openDicom.Image.PixelData(file.DataSet);
// System.Drawing.Bitmap obrazek = (Bitmap)Bitmap.FromFile(element);
pictureBox1.Image = obraz;
pictureBox1.Show();

PixelData is not an image; it is raw pixel information. In my experience, most DICOM files use JPEG 2000-encoded images. To display it in a PictureBox, you need to turn that data into an Image. For raw monochrome types, you can build a System.Drawing.Bitmap with the following conversion:
openDicom.Image.PixelData obraz = new openDicom.Image.PixelData(file.DataSet);
// PixelData16 below is the array of 16-bit pixel samples taken from obraz.
Bitmap img = new System.Drawing.Bitmap(obraz.Columns, obraz.Rows, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
// Scale factor to bring the stored sample range down to 8 bits.
int resampleval = (int)Math.Pow(2, (obraz.BitsAllocated - obraz.BitsStored));
int pxCount = 0;
int temp = 0;
try
{
    unsafe
    {
        BitmapData bd = img.LockBits(new Rectangle(0, 0, obraz.Columns, obraz.Rows), ImageLockMode.WriteOnly, img.PixelFormat);
        for (int r = 0; r < bd.Height; r++)
        {
            byte* row = (byte*)bd.Scan0 + (r * bd.Stride);
            for (int c = 0; c < bd.Width; c++)
            {
                // Resample the 16-bit value until it fits into a byte.
                temp = PixelData16[pxCount] / resampleval;
                while (temp > 255)
                    temp = temp / resampleval;
                // Write the same value to B, G and R to produce a grayscale pixel.
                row[(c * 3)] = (byte)temp;
                row[(c * 3) + 1] = (byte)temp;
                row[(c * 3) + 2] = (byte)temp;
                pxCount++;
            }
        }
        img.UnlockBits(bd);
    }
}
catch
{
    // Fall back to a small empty bitmap if anything goes wrong.
    img = new Bitmap(10, 10);
}
pictureBox1.Image = img;
pictureBox1.Show();
For other image types, you'll need to do a similar conversion with the appropriate values. This conversion is strictly for monochrome types, and only after they have been converted from JPEG 2000 to JPEG. Performing this operation directly on a JPEG 2000-encoded image will give you exactly half of the image filled with static and the other half completely empty.

Related

How to compare 2 images and detect the differences between them

I am making a video recorder. The app works by taking a lot of screenshots and putting them together into one video. I am also trying to add something like screen motion detection: the app should take a screenshot only when a difference on the screen is detected. I was thinking about how to do that, and I believe I need it to keep taking screenshots while comparing each one to the previous one. Is there a way to do that?
The code:
//Record video:
public void RecordVideo()
{
    //Keep track of time:
    watch.Start();
    using (Bitmap bitmap = new Bitmap(bounds.Width, bounds.Height))
    {
        using (Graphics g = Graphics.FromImage(bitmap))
        {
            //Add screen to bitmap:
            g.CopyFromScreen(new Point(bounds.Left, bounds.Top), Point.Empty, bounds.Size);
        }
        //Save screenshot:
        string name = tempPath + "/screenshot-" + fileCount + ".png";
        bitmap.Save(name, ImageFormat.Png);
        inputImageSequence.Add(name);
        fileCount++;
        //No explicit Dispose() needed: the using block already disposes the bitmap.
    }
}
I have something that may be useful for you. The idea is to save only the differences between the images and, with that, later recreate all the images from the starting image and the saved changes.
To do this, you only need to XOR the image bytes. This method lets you get the difference (into the array parameter) between two images:
protected void ApplyXor(Bitmap img1, Bitmap img2, byte[] array)
{
    const ImageLockMode rw = ImageLockMode.ReadWrite;
    const PixelFormat argb = PixelFormat.Format32bppArgb;
    var locked1 = img1.LockBits(new Rectangle(0, 0, img1.Width, img1.Height), rw, argb);
    var locked2 = img2.LockBits(new Rectangle(0, 0, img2.Width, img2.Height), rw, argb);
    try
    {
        ApplyXor(locked2, locked1, array);
    }
    finally
    {
        img1.UnlockBits(locked1);
        img2.UnlockBits(locked2);
    }
}
With the previous img1 bitmap and the returned array, you can recover img2 with this method:
protected void ApplyXor(Bitmap img1, byte[] array, Bitmap img2)
{
    const ImageLockMode rw = ImageLockMode.ReadWrite;
    const PixelFormat argb = PixelFormat.Format32bppArgb;
    var locked1 = img1.LockBits(new Rectangle(0, 0, img1.Width, img1.Height), rw, argb);
    var locked2 = img2.LockBits(new Rectangle(0, 0, img2.Width, img2.Height), rw, argb);
    try
    {
        ApplyXor(locked1, array, locked2);
    }
    finally
    {
        img1.UnlockBits(locked1);
        img2.UnlockBits(locked2);
    }
}
And here are the other required methods:
private unsafe void ApplyXor(BitmapData img1, BitmapData img2, byte[] array)
{
    byte* prev0 = (byte*)img1.Scan0.ToPointer();
    byte* cur0 = (byte*)img2.Scan0.ToPointer();
    int height = img1.Height;
    int width = img1.Width;
    // Each ulong covers 8 bytes, i.e. two 32bpp pixels, so we iterate width / 2 times per row.
    int halfwidth = width / 2;
    fixed (byte* target = array)
    {
        ulong* dst = (ulong*)target;
        for (int y = 0; y < height; ++y)
        {
            ulong* prevRow = (ulong*)(prev0 + img1.Stride * y);
            ulong* curRow = (ulong*)(cur0 + img2.Stride * y);
            for (int x = 0; x < halfwidth; ++x)
            {
                // XOR the two frames; identical pixels produce zeros, which compress well.
                *(dst++) = curRow[x] ^ prevRow[x];
            }
        }
    }
}
private unsafe void ApplyXor(BitmapData img1, byte[] array, BitmapData img2)
{
    byte* prev0 = (byte*)img1.Scan0.ToPointer();
    byte* cur0 = (byte*)img2.Scan0.ToPointer();
    int height = img1.Height;
    int width = img1.Width;
    int halfwidth = width / 2;
    fixed (byte* target = array)
    {
        ulong* dst = (ulong*)target;
        for (int y = 0; y < height; ++y)
        {
            ulong* prevRow = (ulong*)(prev0 + img1.Stride * y);
            ulong* curRow = (ulong*)(cur0 + img2.Stride * y);
            for (int x = 0; x < halfwidth; ++x)
            {
                // Applying the stored XOR difference to the previous frame rebuilds the current one.
                curRow[x] = *(dst++) ^ prevRow[x];
            }
        }
    }
}
NOTE: You must configure your project to allow unsafe code.
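In an SDK-style project that means adding <AllowUnsafeBlocks>true</AllowUnsafeBlocks> to a PropertyGroup in the .csproj; older project types expose the same switch as "Allow unsafe code" in the project's build settings.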
With the previous methods, you can do the following:
Save the img1 bitmap
Get img2 bitmap, do XOR and get the array (array2, for example)
With img3, get the XOR with img2 (array3, for example). Now, img2 isn't needed
With img4, get the XOR with img3 (array4). Now, img3 isn't needed
...
You have img1 and array2, array3, array4... and you can recreate all images:
Make XOR between img1 and array2 to get img2
Make XOR between img2 and array3 to get img3
...
If you need to send video over TCP, you can send one full image and then the XOR arrays (the differences). Or better yet, compress the XOR arrays using K4os.Compression.LZ4.
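Putting it all together, a minimal sketch of the record/replay flow (assuming the ApplyXor overloads above, 32bpp frames of equal size, and a hypothetical CaptureScreen() helper and frameCount) could look like this:
// Record: keep the first frame and one XOR diff per subsequent frame.
Bitmap baseFrame = CaptureScreen();                // hypothetical capture helper
var diffs = new List<byte[]>();
Bitmap previous = baseFrame;
for (int i = 1; i < frameCount; i++)
{
    Bitmap current = CaptureScreen();
    byte[] diff = new byte[current.Width * current.Height * 4];
    ApplyXor(previous, current, diff);             // diff = previous XOR current
    diffs.Add(diff);
    if (previous != baseFrame) previous.Dispose();
    previous = current;
}

// Replay: rebuild each frame from the previous one plus its diff.
Bitmap frame = (Bitmap)baseFrame.Clone();
foreach (byte[] diff in diffs)
{
    Bitmap next = new Bitmap(frame.Width, frame.Height, PixelFormat.Format32bppArgb);
    ApplyXor(frame, diff, next);                   // next = frame XOR diff
    frame.Dispose();
    frame = next;
    // ... use 'frame' here (e.g. append it to the video) ...
}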

PDFSharp - Extract FlateDecode as PNG

Goal
To properly extract FlateDecode image objects from a PDF and save them as PNG.
Please let me know if you see anything wrong with the code below that might be causing issues.
The code below gives me the image, but it is completely distorted. See: (left = good, right = mine with code)
static void ExportAsPngImage(PdfDictionary image, string filename, ref int count)
{
    int width = image.Elements.GetInteger(PdfImage.Keys.Width);
    int height = image.Elements.GetInteger(PdfImage.Keys.Height);
    int bitsPerComponent = image.Elements.GetInteger(PdfImage.Keys.BitsPerComponent);
    var canUnfilter = image.Stream.TryUnfilter();
    byte[] decoded = image.Stream.Value;

    System.Drawing.Imaging.PixelFormat pixelFormat;
    switch (bitsPerComponent)
    {
        case 1:
            pixelFormat = PixelFormat.Format1bppIndexed;
            break;
        case 8:
            pixelFormat = PixelFormat.Format8bppIndexed;
            break;
        case 24:
            pixelFormat = PixelFormat.Format24bppRgb;
            break;
        default:
            throw new Exception("Unknown pixel format " + bitsPerComponent);
    }

    Bitmap bmp = new Bitmap(width, height, pixelFormat);
    var bmd = bmp.LockBits(new System.Drawing.Rectangle(0, 0, width, height), ImageLockMode.WriteOnly, pixelFormat);
    int length = (int)Math.Ceiling(width * bitsPerComponent / 8.0);
    for (int j = 0; j < height; j++)
    {
        int offset = j * length;
        int scanOffset = j * bmd.Stride;
        // IntPtr arithmetic avoids truncating Scan0 to 32 bits on 64-bit processes.
        Marshal.Copy(decoded, offset, bmd.Scan0 + scanOffset, length);
    }
    bmp.UnlockBits(bmd);

    using (var fs = new FileStream(filename + "_" + count + ".png", FileMode.Create, FileAccess.Write))
        bmp.Save(fs, ImageFormat.Png);
    count++;
}
I assume this is a "case 8" image (8 bits per component, indexed). If you extract the image data without the associated color palette and display it with the default palette, you will get images like the one you are showing.
Images in PDF files can consist of several PDF objects: the pixel data, the color palette, an alpha mask, a bilevel mask.
Maybe for that image it is sufficient to create a grayscale palette where color x has the RGB values (x, x, x). But for a general solution extract the palette from the PDF.
Here's a snippet of code that will set the palette for an indexed /DeviceRGB colorspace:
using PdfSharp.Pdf;
using PdfSharp.Pdf.Advanced;
...

// Get the palette if required.
// pf is the pixel format previously extracted from the image dictionary,
// imageDictionary is the PdfDictionary for the image, and
// bmp is the System.Drawing.Bitmap we're going to dump our image data to.
if (pf == System.Drawing.Imaging.PixelFormat.Format8bppIndexed)
{
    PdfArray colArr = imageDictionary.Elements.GetArray(PdfImage.Keys.ColorSpace);
    if (colArr != null && colArr.Elements.GetName(0) == "/Indexed" && colArr.Elements.GetName(1) == "/DeviceRGB")
    {
        // Palette returns a clone, so we manipulate it and then set it back at the end.
        System.Drawing.Imaging.ColorPalette pal = bmp.Palette;
        int palCount = colArr.Elements.GetInteger(2);
        char[] palVal = colArr.Elements.GetString(3).ToCharArray();
        int basePointer = 0;
        for (int i = 0; i < palCount; i++)
        {
            pal.Entries[i] = System.Drawing.Color.FromArgb(palVal[basePointer], palVal[basePointer + 1], palVal[basePointer + 2]);
            basePointer += 3;
        }
        bmp.Palette = pal;
    }
    else
    {
        // Some other colorspace mechanism needs to be implemented.
    }
}
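For the simpler grayscale case mentioned above, a minimal sketch (assuming the same 8bpp indexed bmp) just fills the palette with a linear ramp so that index x maps to (x, x, x):
// Build a grayscale palette: entry x becomes the color (x, x, x).
System.Drawing.Imaging.ColorPalette grayPal = bmp.Palette; // Palette returns a clone
for (int x = 0; x < grayPal.Entries.Length; x++)
{
    grayPal.Entries[x] = System.Drawing.Color.FromArgb(x, x, x);
}
bmp.Palette = grayPal; // assign the modified clone back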

Convert ARGB to PARGB

I've been looking for a fast alternative to SetPixel() and I found this link: C# - Faster Alternatives to SetPixel and GetPixel for Bitmaps for Windows Forms App
My problem is that I have an image and I want to create a copy of it as a DirectBitmap object, but first I need to convert ARGB to PARGB, so I used this code:
public static Color PremultiplyAlpha(Color pixel)
{
    return Color.FromArgb(
        pixel.A,
        PremultiplyAlpha_Component(pixel.R, pixel.A),
        PremultiplyAlpha_Component(pixel.G, pixel.A),
        PremultiplyAlpha_Component(pixel.B, pixel.A));
}

private static byte PremultiplyAlpha_Component(byte source, byte alpha)
{
    return (byte)((float)source * (float)alpha / (float)byte.MaxValue + 0.5f);
}
And here's my copy code:
DirectBitmap DBMP = new DirectBitmap(img.Width, img.Height);
MyImage myImg = new MyImage(img as Bitmap);
for (int i = 0; i < img.Width; i++)
{
    for (int j = 0; j < img.Height; j++)
    {
        Color PARGB = NativeWin32.PremultiplyAlpha(Color.FromArgb(myImg.RGB[i, j].Alpha,
            myImg.RGB[i, j].R, myImg.RGB[i, j].G, myImg.RGB[i, j].B));
        byte[] bitMapData = new byte[4];
        bitMapData[3] = (byte)PARGB.A;
        bitMapData[2] = (byte)PARGB.R;
        bitMapData[1] = (byte)PARGB.G;
        bitMapData[0] = (byte)PARGB.B;
        DBMP.Bits[(i * img.Height) + j] = BitConverter.ToInt32(bitMapData, 0);
    }
}
MyImage is a class containing a Bitmap object along with an array of RGB structs storing the color of each pixel.
However, this code gives me a messed-up image. What am I doing wrong?
Bitmap data is organized row by row, one horizontal line after another. Therefore, your last line should be:
DBMP.Bits[j * img.Width + i] = BitConverter.ToInt32(bitMapData, 0);
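With that indexing, the whole copy loop would look roughly like the sketch below (same DirectBitmap, MyImage and PremultiplyAlpha helpers as in the question; the outer loop now walks rows so consecutive writes hit consecutive entries of the row-major Bits array):
for (int j = 0; j < img.Height; j++)        // rows
{
    for (int i = 0; i < img.Width; i++)     // columns
    {
        Color pargb = NativeWin32.PremultiplyAlpha(Color.FromArgb(
            myImg.RGB[i, j].Alpha, myImg.RGB[i, j].R, myImg.RGB[i, j].G, myImg.RGB[i, j].B));
        byte[] bitMapData = new byte[4];
        bitMapData[3] = pargb.A;
        bitMapData[2] = pargb.R;
        bitMapData[1] = pargb.G;
        bitMapData[0] = pargb.B;
        // Row-major index: one full row of img.Width pixels per value of j.
        DBMP.Bits[j * img.Width + i] = BitConverter.ToInt32(bitMapData, 0);
    }
}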

Scaling an Image Byte Array via Nearest Neighbor

I am trying to scale an image by using a byte array. My plan was to use nearest neighbor to find the pixel data for the new byte array, but I am having some issues transforming the source data of my image. srcImage is of type Image, and I am able to successfully convert it to a byte array and back into an image; the problem is scaling that image through its byte array. I use a MemoryStream (in my byteArrayToImage method) to convert trgData back to an Image, but I receive an ArgumentException telling me that the 'new MemoryStream(byteArray)' parameter is not valid.
Does anyone happen to know what might be the issue or how to correctly scale a byte array from an Image? For right now, we can assume that I am only scaling and saving png files but I would like to eventually expand this to other image formats.
Here is my code for the copying and scaling of the image data:
byte[] srcData = ImageToByteArraybyImageConverter(srcImage);

// data transformations here
float srcRatio = srcImage.Width / srcImage.Height;
int bitsPerPixel = ((int)srcImage.PixelFormat & 0xff00) >> 8;
int bytesPerPixel = (bitsPerPixel + 7) / 8;
int srcStride = 4 * ((srcImage.Width * bytesPerPixel + 3) / 4);

// find max scale value
int scale = 3; // NOTE: temporary

// create new byte array to store new image
int width = srcImage.Width * scale;
int height = srcImage.Height * scale;
int stride = width;
byte[] trgData = new byte[height * stride];

// update the progress bar
progressBar1.Value = 10;

int i = -1; // index for src data
// copy pixel data
for (int n = 0; n < trgData.Length; n++)
{
    if (n % stride == 0)
    {
        i++;
    }
    trgData[n] = srcData[i];
}
progressBar1.Value = 60;

// convert the pixel data to image
Image newImage = byteArrayToImage(trgData);
progressBar1.Value = 70;
if (newImage != null)
{
    // save the image to disk
    newImage.Save(newFileName);
    progressBar1.Value = 100;
}
Let me know if you have any more questions, and thank you!
Edited: Here are the methods that load and save the byte array
private byte[] ImageToByteArraybyImageConverter(System.Drawing.Image image)
{
    ImageConverter imageConverter = new ImageConverter();
    byte[] imageByte = (byte[])imageConverter.ConvertTo(image, typeof(byte[]));
    return imageByte;
}

private Image byteArrayToImage(byte[] byteArrayIn)
{
    Image returnImage = null;
    using (MemoryStream ms = new MemoryStream(byteArrayIn))
    {
        returnImage = Image.FromStream(ms);
    }
    return returnImage;
}
You are doing a very strange thing here:
int i = -1; // index for src data
// copy pixel data
for (int n = 0; n < trgData.Length; n++)
{
    if (n % stride == 0)
    {
        i++;
    }
    trgData[n] = srcData[i];
}
After every width iterations (because stride == width) you increment the source index. That is, the whole first line of the new image is filled with the first byte of the original image, the second line with the second byte, and so on.
Try this instead:
int targetIdx = 0;
for (int i = 0; i < height; ++i)
{
    int iUnscaled = i / scale;
    for (int j = 0; j < width; ++j)
    {
        int jUnscaled = j / scale;
        trgData[targetIdx++] = srcData[iUnscaled * origWidth + jUnscaled];
    }
}
Note that it assumes BPP = 1, that is, it copies every byte scale times both horizontally and vertically. It's not hard to modify it for a BPP greater than 1.
Here is a demo illustrating both algorithms (comment the first line to see how your algo behaves)
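As a rough illustration of that modification, a sketch for a BPP greater than 1 (assuming srcData really holds raw, stride-aligned pixel bytes, using srcStride and bytesPerPixel from the question, and a trgData sized height * width * bytesPerPixel) might look like:
// Nearest-neighbor upscale copying all bytesPerPixel bytes of the nearest source pixel.
int targetIdx = 0;
for (int i = 0; i < height; ++i)
{
    int iUnscaled = i / scale;
    for (int j = 0; j < width; ++j)
    {
        int jUnscaled = j / scale;
        int srcOffset = iUnscaled * srcStride + jUnscaled * bytesPerPixel;
        for (int b = 0; b < bytesPerPixel; ++b)
        {
            trgData[targetIdx++] = srcData[srcOffset + b];
        }
    }
}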

C# Creating PixelFormat.Format32bppArgb skewing image

I am trying to combine 3 grayscale bitmaps into one color bitmap. All three grayscale images are the same size (this is based on data from the Hubble telescope). My logic is:
Load the "blue" image and convert it to PixelFormat.Format24bppRgb. Based on that, create a new byte array that is 4/3 the size of the blue data array (one byte for blue, one for green, one for red, and one for alpha per pixel, since my system is little endian). Populate the "blue" bytes of the array from the "blue" bytes of the blue image (and in this first loop set the alpha byte to 255). I then load the green and red bitmaps, convert them to PixelFormat.Format24bppRgb, pull the G/R values, and add them to the correct places in the data array. The final data array then has the BGRA bytes set correctly, from what I can tell.
When I have the data array populated, I have used it to:
Create a PixelFormats.Bgra32 BitmapSource then convert that to a Bitmap.
Create a PixelFormat.Format32bppArgb Bitmap using the Bitmap constructor (width, height, stride, PixelFormat, IntPtr)
Create a PixelFormat.Format32bppArgb Bitmap using pointers
All three ways of creating a return bitmap result in the image being "skewed" (sorry, I don't know of a better word).
The actual output (of all three ways of generating the final bitmap) is: Actual output
The desired output is something like (this was done in photoshop so it is slightly different): Desired output
The three file names (_blueFileName, _greenFileName, _redFileName) are set in the constructor and I check to make sure the files exist before creating the class. I can post that code if anyone wants it.
Can anyone tell me what I am doing wrong? I am guessing that it is due to the stride or something like that?
Note: I can't post the links to the images I am using as input as I don't have 10 reputation points. Maybe I could send the links via email or something if someone wants them as well.
Here is my code (with some stuff commented out, the comments describe what happens if each commented out block is used instead):
public Bitmap Merge()
{
    // Load original "blue" bitmap.
    Bitmap tblueBitmap = (Bitmap)Image.FromFile(_blueFileName);
    int width = tblueBitmap.Width;
    int height = tblueBitmap.Height;

    // Convert to 24 bpp rgb (which is bgr on little endian machines)
    Bitmap blueBitmap = new Bitmap(width, height, PixelFormat.Format24bppRgb);
    using (Graphics gr = Graphics.FromImage(blueBitmap))
    {
        gr.DrawImage(tblueBitmap, 0, 0, width, height);
    }
    tblueBitmap.Dispose();

    // Lock and copy to byte array.
    BitmapData blueData = blueBitmap.LockBits(new Rectangle(0, 0, blueBitmap.Width, blueBitmap.Height), ImageLockMode.ReadOnly,
        blueBitmap.PixelFormat);
    int numbBytes = blueData.Stride * blueBitmap.Height;
    byte[] blueBytes = new byte[numbBytes];
    Marshal.Copy(blueData.Scan0, blueBytes, 0, numbBytes);
    blueBitmap.UnlockBits(blueData);
    blueData = null;
    blueBitmap.Dispose();

    int mult = 4;
    byte[] data = new byte[(numbBytes / 3) * mult];
    int count = 0;

    // Copy every third byte starting at 0 to the final data array (data).
    for (int i = 0; i < data.Length / mult; i++)
    {
        // Check for overflow
        if (blueBytes.Length <= count * 3 + 2)
        {
            continue;
        }
        // First pass, set Alpha channel.
        data[i * mult + 3] = 255;
        // Set blue byte.
        data[i * mult] = blueBytes[count * 3];
        count++;
    }

    // Cleanup.
    blueBytes = null;
    int generation = GC.GetGeneration(this);
    GC.Collect(generation);
    Bitmap tgreenBitmap = (Bitmap)Image.FromFile(_greenFileName);
    Bitmap greenBitmap = new Bitmap(width, height, PixelFormat.Format24bppRgb);
    using (Graphics gr = Graphics.FromImage(greenBitmap))
    {
        gr.DrawImage(tgreenBitmap, 0, 0, width, height);
    }
    tgreenBitmap.Dispose();

    BitmapData greenData = greenBitmap.LockBits(new Rectangle(0, 0, greenBitmap.Width, greenBitmap.Height), ImageLockMode.ReadOnly,
        greenBitmap.PixelFormat);
    numbBytes = greenData.Stride * greenBitmap.Height;
    byte[] greenBytes = new byte[numbBytes];
    Marshal.Copy(greenData.Scan0, greenBytes, 0, numbBytes);
    greenBitmap.UnlockBits(greenData);
    greenData = null;
    greenBitmap.Dispose();

    count = 0;
    for (int i = 0; i < data.Length / mult; i++)
    {
        if (greenBytes.Length <= count * 3 + 1)
        {
            continue;
        }
        // Set green byte
        data[i * mult + 1] = greenBytes[count * 3 + 1];
        count++;
    }
    greenBytes = null;
    generation = GC.GetGeneration(this);
    GC.Collect(generation);
    Bitmap tredBitmap = (Bitmap)Image.FromFile(_redFileName);
    Bitmap redBitmap = new Bitmap(width, height, PixelFormat.Format24bppRgb);
    using (Graphics gr = Graphics.FromImage(redBitmap))
    {
        gr.DrawImage(tredBitmap, 0, 0, width, height);
    }
    tredBitmap.Dispose();

    BitmapData redData = redBitmap.LockBits(new Rectangle(0, 0, redBitmap.Width, redBitmap.Height), ImageLockMode.ReadOnly,
        redBitmap.PixelFormat);
    numbBytes = redData.Stride * redBitmap.Height;
    byte[] redBytes = new byte[numbBytes];
    Marshal.Copy(redData.Scan0, redBytes, 0, numbBytes);
    redBitmap.UnlockBits(redData);
    redData = null;
    redBitmap.Dispose();

    count = 0;
    for (int i = 0; i < data.Length / mult; i++)
    {
        if (redBytes.Length <= count * 3 + 2)
        {
            count++;
            continue;
        }
        // Set red byte
        data[i * mult + 2] = redBytes[count * 3 + 2];
        count++;
    }
    redBytes = null;
    generation = GC.GetGeneration(this);
    GC.Collect(generation);
    int stride = (width * 32 + 7) / 8;
    var bi = BitmapSource.Create(width, height, 96, 96, PixelFormats.Bgra32, null, data, stride);

    // Uncomment the line below to see what a BitmapSource-to-Bitmap conversion does. So far, it is exactly the same as
    // the uncommented-out lines below.
    // ---------------------------------------------------------------------------------------------------
    //return BitmapImage2Bitmap(bi);

    unsafe
    {
        fixed (byte* p = data)
        {
            IntPtr ptr = (IntPtr)p;
            // Trying the commented-out lines returns the same bitmap as the uncommented-out lines.
            // ------------------------------------------------------------------------------------
            byte* p2 = (byte*)ptr;
            Bitmap retBitmap = new Bitmap(width, height, PixelFormat.Format32bppArgb);
            BitmapData fData = retBitmap.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.ReadWrite,
                PixelFormat.Format32bppArgb);
            for (int i = 0; i < fData.Height; i++)
            {
                byte* imgPtr = (byte*)(fData.Scan0 + (fData.Stride * i));
                for (int x = 0; x < fData.Width; x++)
                {
                    for (int ii = 0; ii < 4; ii++)
                    {
                        *imgPtr++ = *p2++;
                    }
                    //*imgPtr++ = 255;
                }
            }
            retBitmap.UnlockBits(fData);
            //Bitmap retBitmap = new Bitmap(width, height, GetStride(width, PixelFormat.Format32bppArgb),
            //    PixelFormat.Format32bppArgb, ptr);
            return retBitmap;
        }
    }
}
private Bitmap BitmapImage2Bitmap(BitmapSource bitmapSrc)
{
    using (MemoryStream outStream = new MemoryStream())
    {
        BitmapEncoder enc = new BmpBitmapEncoder();
        enc.Frames.Add(BitmapFrame.Create(bitmapSrc));
        enc.Save(outStream);
        Bitmap bitmap = new Bitmap(outStream);
        return new Bitmap(bitmap);
    }
}
private int GetStride(int width, PixelFormat pxFormat)
{
    int bitsPerPixel = ((int)pxFormat >> 8) & 0xFF;
    int validBitsPerLine = width * bitsPerPixel;
    int stride = ((validBitsPerLine + 31) / 32) * 4;
    return stride;
}
You are missing the gap between the lines. The Stride value is not the amount of data in a line; it's the distance from the start of one line to the start of the next. There may be a gap at the end of each line to align the start of the next line on a 4-byte boundary.
The Stride value can even be negative; in that case the image is stored upside down in memory. To get the data without the gaps and to handle all cases, you need to copy one line at a time:
BitmapData blueData = blueBitmap.LockBits(new Rectangle(0, 0, blueBitmap.Width, blueBitmap.Height), ImageLockMode.ReadOnly, blueBitmap.PixelFormat);
int lineBytes = blueBitmap.Width * 3;
int numbBytes = lineBytes * blueBitmap.Height;
byte[] blueBytes = new byte[numbBytes];
for (int y = 0; y < blueBitmap.Height; y++)
{
    Marshal.Copy(blueData.Scan0 + y * blueData.Stride, blueBytes, y * lineBytes, lineBytes);
}
blueBitmap.UnlockBits(blueData);
blueBitmap.Dispose();
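The same idea applies when writing the merged BGRA data back out. A minimal sketch, assuming data holds width * height * 4 tightly packed BGRA bytes, copies one line at a time into the destination so any stride padding is skipped:
// Write the packed BGRA buffer into a 32bpp bitmap one row at a time.
Bitmap result = new Bitmap(width, height, PixelFormat.Format32bppArgb);
BitmapData resultData = result.LockBits(new Rectangle(0, 0, width, height),
    ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
int outLineBytes = width * 4;
for (int y = 0; y < height; y++)
{
    Marshal.Copy(data, y * outLineBytes, resultData.Scan0 + y * resultData.Stride, outLineBytes);
}
result.UnlockBits(resultData);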
