Compress animated GIF image size using C#

I wanted to create an animated GIF image from several images using C#, so I used the GitHub solution below:
https://github.com/DataDink/Bumpkit
I am using the following code to do it:
using (var gif = File.OpenWrite(@"C:\IMG_TEST.gif"))
using (var encoder = new GifEncoder(gif))
{
    for (int i = 0, count = imageFilePaths.Length; i < count; i++)
    {
        Image image = Image.FromFile(imageFilePaths[i]);
        encoder.AddFrame(image, 0, 0);
    }
}
It works like a charm, but it creates a GIF of about 45 MB. The source images are only 11 MB in total across 47 files, yet somehow the GIF comes out much larger.
Now I want to compress the size of the GIF being created with C#.
So is there any way I can compress the size of the GIF image?

I know this is an old question, but I figured I'd share this solution.
I ran into the same issue and found that each frame should contain only the pixels that differ from the previous frame. I thought the encoder would do the image diff for me, but apparently it doesn't.
So before adding each frame, I compare it to the previous frame using the method I wrote below, and I add the resulting image that contains only the changed pixels.
Here's where I'm using it: https://github.com/Jay-Rad/CleanShot/blob/master/CleanShot/Classes/GIFRecorder.cs
public class ImageDiff
{
    public static Bitmap GetDifference(Bitmap bitmap1, Bitmap bitmap2)
    {
        if (bitmap1.Height != bitmap2.Height || bitmap1.Width != bitmap2.Width)
        {
            throw new Exception("Bitmaps are not of equal dimensions.");
        }
        if (!Bitmap.IsAlphaPixelFormat(bitmap1.PixelFormat) || !Bitmap.IsAlphaPixelFormat(bitmap2.PixelFormat) ||
            !Bitmap.IsCanonicalPixelFormat(bitmap1.PixelFormat) || !Bitmap.IsCanonicalPixelFormat(bitmap2.PixelFormat))
        {
            throw new Exception("Bitmaps must be 32 bits per pixel and contain alpha channel.");
        }
        var newImage = new Bitmap(bitmap1.Width, bitmap1.Height);
        var bd1 = bitmap1.LockBits(new System.Drawing.Rectangle(0, 0, bitmap1.Width, bitmap1.Height), ImageLockMode.ReadOnly, bitmap1.PixelFormat);
        var bd2 = bitmap2.LockBits(new System.Drawing.Rectangle(0, 0, bitmap2.Width, bitmap2.Height), ImageLockMode.ReadOnly, bitmap2.PixelFormat);
        // Get the address of the first line.
        IntPtr ptr1 = bd1.Scan0;
        IntPtr ptr2 = bd2.Scan0;
        // Declare arrays to hold the bytes of the bitmaps.
        int bytes = Math.Abs(bd1.Stride) * bitmap1.Height;
        byte[] rgbValues1 = new byte[bytes];
        byte[] rgbValues2 = new byte[bytes];
        // Copy the pixel bytes into the arrays.
        Marshal.Copy(ptr1, rgbValues1, 0, bytes);
        Marshal.Copy(ptr2, rgbValues2, 0, bytes);
        // Check the channel bytes for each pixel (4 bytes per pixel).
        for (int counter = 0; counter < rgbValues1.Length; counter += 4)
        {
            if (rgbValues1[counter] != rgbValues2[counter] ||
                rgbValues1[counter + 1] != rgbValues2[counter + 1] ||
                rgbValues1[counter + 2] != rgbValues2[counter + 2] ||
                rgbValues1[counter + 3] != rgbValues2[counter + 3])
            {
                // Change was found: copy this pixel from bitmap1 into the diff image.
                var pixel = counter / 4;
                var row = (int)Math.Floor((double)pixel / bd1.Width);
                var column = pixel % bd1.Width;
                newImage.SetPixel(column, row, Color.FromArgb(rgbValues1[counter + 3], rgbValues1[counter + 2], rgbValues1[counter + 1], rgbValues1[counter]));
            }
        }
        bitmap1.UnlockBits(bd1);
        bitmap2.UnlockBits(bd2);
        return newImage;
    }
}
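For completeness, here is a rough sketch (not the exact code from the linked GIFRecorder) of how GetDifference could be wired into the loop from the question. It assumes the Bumpkit GifEncoder with the AddFrame(image, x, y) call seen in the question, and 32bpp frames of equal size:
// Sketch only: write the first frame whole, then feed the encoder only the
// per-frame differences. Unchanged pixels in the diff bitmap stay transparent.
using (var gif = File.OpenWrite(@"C:\IMG_TEST.gif"))
using (var encoder = new GifEncoder(gif))
{
    Bitmap previous = null;
    for (int i = 0; i < imageFilePaths.Length; i++)
    {
        Bitmap current;
        using (var loaded = Image.FromFile(imageFilePaths[i]))
            current = new Bitmap(loaded); // 32bppArgb copy so GetDifference accepts it

        if (previous == null)
        {
            encoder.AddFrame(current, 0, 0);      // first frame goes in whole
        }
        else
        {
            // GetDifference(current, previous) keeps the changed pixels of the current frame.
            using (var diff = ImageDiff.GetDifference(current, previous))
                encoder.AddFrame(diff, 0, 0);     // later frames carry only changed pixels
            previous.Dispose();
        }
        previous = current;
    }
    previous?.Dispose();
}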

Related

Capture image from screen and get colors

I'm making a program that captures a small area of the screen and runs something if any color in the image matches the target colors. My program runs in the following sequence:
Get an image of a specific area of the screen
Save it to a folder
Use CountPixels to detect any target_color
However, after I click button5 two times (not a double click), it throws an exception at the line below:
b.Save(@"C:\Applications\CaptureImage000.jpg", ImageFormat.Jpeg);
Exception :
An unhandled exception of type
'System.Runtime.InteropServices.ExternalException' occurred in
System.Drawing.dll
Additional information: A generic error occurred in GDI+
My questions are:
How can I fix this exception?
I want to use another method instead of CountPixel() to improve performance, because I only need to detect a single target color to raise the event.
Step 2 is troublesome. I wonder if I can skip it and refer to the capture some other way instead of (@"C:\Applications\CaptureImage000.jpg", ImageFormat.Jpeg), because working with this long path string isn't comfortable and causes errors when I try to use it with GetPixel or adapt sample code from the internet.
private int CountPixels(Bitmap bm, Color target_color)
{
    // Loop through the pixels.
    int matches = 0;
    for (int y = 0; y < bm.Height; y++)
    {
        for (int x = 0; x < bm.Width; x++)
        {
            if (bm.GetPixel(x, y) == target_color) matches++;
        }
    }
    return matches;
}
private Bitmap CapturedImage(int x, int y)
{
    Bitmap b = new Bitmap(XX, YY);
    Graphics g = Graphics.FromImage(b);
    g.CopyFromScreen(x, y, 0, 0, new Size(XX, YY));
    b.Save(@"C:\Applications\CaptureImage000.jpg", ImageFormat.Jpeg);
    /* Running the 3 lines below leads to question 1 - it throws the exception
    Bitmap bm = new Bitmap(@"C:\Applications\CaptureImage000.jpg");
    int black_pixels = CountPixels(b, Color.FromArgb(255, 0, 0, 0));
    textBox3.Text = black_pixels + " black pixels";
    */
    return b;
}
private void button5_Click(object sender, EventArgs e) // Do screen cap
{
    Bitmap bmp = null;
    bmp = CapturedImage(X0, Y0);
}
[EDIT] Worked on this tonight with the OP and made some improvements.
This now accounts for the endianness of the machine and correctly compares colors by converting them to integers with the Color.ToArgb() function.
The code below will work. I have added comments for clarity and given you some options. I wrote the code without an IDE, but I am confident it is fine.
In both cases below, just keep the handle to the bitmap; you don't need to save and reopen it regardless of whether you need a copy.
Exception issue and improvements to the CapturedImage function
Option A (recommended)
Don't save the bitmap. You already have a handle, and the graphics object has already drawn into the bitmap. Just leave the code below as is for this function, and it will work fine without uncommenting either of the other options.
Code and other options:
private Bitmap CapturedImage(Bitmap bm, int x, int y)
{
    Bitmap b = new Bitmap(XX, YY);
    Graphics g = Graphics.FromImage(b);
    g.CopyFromScreen(x, y, 0, 0, new Size(XX, YY));
    //option B - If you DO need to keep a copy of the image, use PNG and delete the old image
    /*
    try
    {
        if (System.IO.File.Exists(@"C:\Applications\CaptureImage.png"))
        {
            System.IO.File.Delete(@"C:\Applications\CaptureImage.png");
        }
        b.Save(@"C:\Applications\CaptureImage.png", ImageFormat.Png);
    }
    catch (System.Exception ex)
    {
        MessageBox.Show("There was a problem trying to save the image, is the file open in another program?\r\nError:\r\n\r\n" + ex.Message);
    }
    */
    //option C - If you DO need to keep a copy of the image AND keep all copies of all images when you click the button, use PNG and generate a unique filename
    /*
    int id = 0;
    while (System.IO.File.Exists(@"C:\Applications\CaptureImage" + id.ToString().PadLeft(4, '0') + ".png"))
    {
        //increment the id until a unique file name is found
        id++;
    }
    b.Save(@"C:\Applications\CaptureImage" + id.ToString().PadLeft(4, '0') + ".png", ImageFormat.Png);
    */
    int black_pixels = CountPixels(b, Color.FromArgb(255, 0, 0, 0));
    textBox3.Text = black_pixels + " black pixels";
    return b;
}
Now for the CountPixels function: you have 3 options, but really you have one solid option, so I am omitting the others.
This locks the bits in the bitmap, uses marshalling to copy the data into an array, and scans the array, which is very, very fast; you will likely not even need to remove the count. If you do STILL want to remove the count, just add "return 1;" right underneath where the matches variable is incremented.
Speed issue and improvements to the CountPixels function
private int CountPixels(Bitmap bm, Color target_color)
{
    int matches = 0;
    Bitmap bmp = (Bitmap)bm.Clone();
    BitmapData bmpDat = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadWrite, bmp.PixelFormat);
    int size = bmpDat.Stride * bmpDat.Height;
    byte[] subPx = new byte[size];
    System.Runtime.InteropServices.Marshal.Copy(bmpDat.Scan0, subPx, 0, size);
    //change the 4 (ARGB) to a 3 (RGB) if you don't have an alpha channel; this is for 32bpp images
    for (int i = 0; i < size; i += 4) //4 bytes per pixel: A, R, G, B
    {
        //ternary operator to check the endianness of the machine and read the pixel bytes as B,G,R,A or A,R,G,B (little endian is reversed)
        Color temp = BitConverter.IsLittleEndian
            ? Color.FromArgb(subPx[i + 2], subPx[i + 1], subPx[i])
            : Color.FromArgb(subPx[i + 1], subPx[i + 2], subPx[i + 3]);
        if (temp.ToArgb() == target_color.ToArgb())
        {
            matches++;
        }
    }
    System.Runtime.InteropServices.Marshal.Copy(subPx, 0, bmpDat.Scan0, subPx.Length);
    bmp.UnlockBits(bmpDat);
    return matches;
}
Finally the same function but allowing for a tolerance percent
private int CountPixels(Bitmap bm, Color target_color, float tolerancePercent)
{
    int matches = 0;
    Bitmap bmp = (Bitmap)bm.Clone();
    BitmapData bmpDat = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadWrite, bmp.PixelFormat);
    int size = bmpDat.Stride * bmpDat.Height;
    byte[] subPx = new byte[size];
    System.Runtime.InteropServices.Marshal.Copy(bmpDat.Scan0, subPx, 0, size);
    for (int i = 0; i < size; i += 4)
    {
        //read the channels in the right order for the machine's endianness (B,G,R,A in memory on little endian; A,R,G,B otherwise)
        byte r = BitConverter.IsLittleEndian ? subPx[i + 2] : subPx[i + 1];
        byte g = BitConverter.IsLittleEndian ? subPx[i + 1] : subPx[i + 2];
        byte b = BitConverter.IsLittleEndian ? subPx[i] : subPx[i + 3];
        float distancePercent = (float)Math.Sqrt(
            Math.Abs(target_color.R - r) +
            Math.Abs(target_color.G - g) +
            Math.Abs(target_color.B - b)
        ) / 7.65f;
        if (distancePercent < tolerancePercent)
        {
            matches++;
        }
    }
    System.Runtime.InteropServices.Marshal.Copy(subPx, 0, bmpDat.Scan0, subPx.Length);
    bmp.UnlockBits(bmpDat);
    return matches;
}
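Purely for illustration, a hypothetical call site could look like the following. X0, Y0 and textBox3 come from the question's own code; passing null for the unused bitmap parameter and the 0.1f tolerance are arbitrary example values:
// Hypothetical usage sketch: capture the region and count black pixels,
// once with an exact match and once with a small tolerance.
Bitmap capture = CapturedImage(null, X0, Y0); // the first parameter is unused in option A
int exactBlack = CountPixels(capture, Color.FromArgb(255, 0, 0, 0));
int nearBlack = CountPixels(capture, Color.FromArgb(255, 0, 0, 0), 0.1f);
textBox3.Text = exactBlack + " exact / " + nearBlack + " near-black pixels";
capture.Dispose();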

Generic error occurred in GDI+ saving bitmap to file in a loop within C#

I'm saving a bitmap to a file on my hard drive inside of a loop (all the JPEG files within a directory are being saved to a database). The save works fine the first pass through the loop, but then gives the subject error on the second pass. I thought perhaps the file was getting locked, so I tried generating a unique file name for each pass, and I'm also calling Dispose() on the bitmap after the file gets saved. Any idea what is causing this error?
Here is my code:
private string fileReducedDimName = @"c:\temp\Photos\test\filePhotoRedDim";
...
foreach (string file in files)
{
    int i = 0;
    //if the file dimensions are big, scale the file down
    Stream photoStream = File.OpenRead(file);
    byte[] photoByte = new byte[photoStream.Length];
    photoStream.Read(photoByte, 0, System.Convert.ToInt32(photoByte.Length));
    Image image = Image.FromStream(new MemoryStream(photoByte));
    Bitmap bm = ScaleImage(image);
    bm.Save(fileReducedDimName + i.ToString() + ".jpg", ImageFormat.Jpeg); //error occurs here
    Array.Clear(photoByte, 0, photoByte.Length);
    bm.Dispose();
    i++;
}
...
Thanks
Here's the scale image code (this seems to be working OK):
protected Bitmap ScaleImage(System.Drawing.Image Image)
{
    //reduce dimensions of image if appropriate
    int destWidth;
    int destHeight;
    int sourceRes;     //resolution of image
    int maxDimPix;     //largest dimension of image in pixels
    int maxDimInch;    //largest dimension of image in inches
    Double redFactor;  //factor to reduce dimensions by
    if (Image.Width > Image.Height)
    {
        maxDimPix = Image.Width;
    }
    else
    {
        maxDimPix = Image.Height;
    }
    sourceRes = Convert.ToInt32(Image.HorizontalResolution);
    maxDimInch = Convert.ToInt32(maxDimPix / sourceRes);
    //Assign size reduction factor based on max dimension of image (inches)
    if (maxDimInch >= 17)
    {
        redFactor = 0.45;
    }
    else if (maxDimInch < 17 && maxDimInch >= 11)
    {
        redFactor = 0.65;
    }
    else if (maxDimInch < 11 && maxDimInch >= 8)
    {
        redFactor = 0.85;
    }
    else //smaller than 8", don't reduce dimensions
    {
        redFactor = 1;
    }
    destWidth = Convert.ToInt32(Image.Width * redFactor);
    destHeight = Convert.ToInt32(Image.Height * redFactor);
    Bitmap bm = new Bitmap(destWidth, destHeight, PixelFormat.Format24bppRgb);
    bm.SetResolution(Image.HorizontalResolution, Image.VerticalResolution);
    Graphics grPhoto = Graphics.FromImage(bm);
    grPhoto.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
    grPhoto.DrawImage(Image,
        new Rectangle(0, 0, destWidth, destHeight),
        new Rectangle(0, 0, Image.Width, Image.Height),
        GraphicsUnit.Pixel);
    grPhoto.Dispose();
    return bm;
}
If I'm reading the code right, your i variable is zero every time through the loop.
It is hard to diagnose exactly what is wrong. I would recommend that you use using statements to ensure that your instances are getting disposed of properly, although it looks like they are.
I originally thought it might be an issue with ScaleImage, so I tried a different resize function (C# GDI+ Image Resize Function) and it worked; but i is always set back to zero at the beginning of each loop iteration. Once you move i's initialization outside of the loop, your scale method works as well.
private void MethodName()
{
    string fileReducedDimName = @"c:\pics";
    int i = 0;
    foreach (string file in Directory.GetFiles(fileReducedDimName, "*.jpg"))
    {
        //if the file dimensions are big, scale the file down
        using (Image image = Image.FromFile(file))
        {
            using (Bitmap bm = ScaleImage(image))
            {
                bm.Save(fileReducedDimName + @"\" + i.ToString() + ".jpg", ImageFormat.Jpeg);
                //this is all redundant code - not needed
                //Array.Clear(photoByte, 0, photoByte.Length);
                //bm.Dispose();
            }
        }
        //ResizeImage(file, 50, 50, fileReducedDimName + @"\" + i.ToString() + ".jpg");
        i++;
    }
}

How to save image as DICOM

I need to save a JPEG image as DICOM using C# and some free library. I read a lot of topics describing how to do the opposite, but I couldn't find anywhere how to perform what I need. The best I could achieve is to save the image using ClearCanvas, but it gets distorted.
DicomFile dicomFile = new DicomFile();
dicomFile.MediaStorageSopClassUid = SopClass.DigitalXRayImageStorageForPresentation.Uid;
dicomFile.DataSet[DicomTags.SopClassUid].SetStringValue(SopClass.DigitalXRayImageStorageForPresentation.Uid);
dicomFile.TransferSyntax = TransferSyntax.ExplicitVrLittleEndian;
dicomFile.DataSet[DicomTags.ImageType].SetStringValue(@"ORIGINAL\PRIMARY");
dicomFile.DataSet[DicomTags.Columns].SetInt32(0, width);
dicomFile.DataSet[DicomTags.Rows].SetInt32(0, height);
dicomFile.DataSet[DicomTags.BitsStored].SetInt16(0, bitsPerPixel);
dicomFile.DataSet[DicomTags.BitsAllocated].SetInt16(0, 8);
dicomFile.DataSet[DicomTags.HighBit].SetInt16(0, 7);
dicomFile.DataSet[DicomTags.PixelData].Values = imageBuffer;
dicomFile.Save("e:\\tempFile.dcm");
Can anyone please tell me what's wrong with the code above or provide a simple working example on any other free library?
It is a little bit of code, but this is how I do it. This appears to be a common question with duplicates. I pieced this together over time from the ClearCanvas forums, but it is a completely valid answer to the question asked.
If you need to create DICOM Secondary Capture images from standard image files like JPG, and you want them to work correctly with all the PACS, VNAs, and other DICOM applications out there, then this code works for that.
OK, I have to edit one more time. I pieced this together for fun; I just needed to be able to do it. Some DICOM images I created I added to my test suite, but I had more fun with it than anything. I took the Homer Simpson brain picture and wrapped it, as well as the "When Radiologists take a selfie" picture. Not to forget the last one I did: there was a high-quality X-ray of a moray eel in the news fairly recently, so I wrapped that one in DICOM too. Hence the example you see.
OK, even one more edit. Since writing this answer, I have discovered a very valuable ability with this code: I can generate pixel data in any fashion to test our product. I can already generate DICOM images in Explicit VR Little Endian at 10,000 x 10,000 pixels, which can definitely cause problems in the DICOM products out there, but I can generate them with ClearCanvas without problems!
I can also send data using this code with simple, small 5 x 5 pixel images, and that helps so much for testing to build large databases quickly or ramp up certain backlogs. I only hope someone else finds this as useful as I have.
using ClearCanvas.Dicom.Codec;
using ClearCanvas.Common.Utilities;
using ClearCanvas.Dicom;
using ClearCanvas.Dicom.Network;
using ClearCanvas.Common;
using ClearCanvas.ImageViewer;
using ClearCanvas.ImageViewer.Imaging;
using ClearCanvas.ImageViewer.Graphics;
using ClearCanvas.ImageViewer.StudyManagement;
DicomFile df = null;
Bitmap bm = LoadImage(tbImageFile.Text);
CreateBaseDataSet();
df = ConvertImage(bm, 1);
df.Save(@"C:\test.dcm", DicomWriteOptions.Default);
Then here is all the rest of it:
private void CreateBaseDataSet()
{
    _baseDataSet = new DicomAttributeCollection();
    //Sop Common
    _baseDataSet[DicomTags.SopClassUid].SetStringValue(SopClass.SecondaryCaptureImageStorageUid);
    ////Patient
    //_baseDataSet[DicomTags.PatientId].SetStringValue(_parent.PatientId);
    //_baseDataSet[DicomTags.PatientsName].SetStringValue(String.Format("{0}^{1}^{2}^^",
    //    _parent.LastName, _parent.FirstName, _parent.MiddleName));
    //_baseDataSet[DicomTags.PatientsBirthDate].SetDateTime(0, _parent.Dob);
    //_baseDataSet[DicomTags.PatientsSex].SetStringValue(_parent.Sex.ToString());
    ////Study
    //_baseDataSet[DicomTags.StudyInstanceUid].SetStringValue(DicomUid.GenerateUid().UID);
    //_baseDataSet[DicomTags.StudyDate].SetDateTime(0, _parent.StudyDate);
    //_baseDataSet[DicomTags.StudyTime].SetDateTime(0, _parent.StudyTime);
    //_baseDataSet[DicomTags.AccessionNumber].SetStringValue(_parent.AccessionNumber);
    //_baseDataSet[DicomTags.StudyDescription].SetStringValue(_parent.StudyDescription);
    //Patient
    _baseDataSet[DicomTags.PatientId].SetStringValue("PIDEEL");
    _baseDataSet[DicomTags.PatientsName].SetStringValue(String.Format("Moray^Eel^X-Ray"));
    //_baseDataSet[DicomTags.PatientsAddress].SetString(0, "Hubertus");
    //_baseDataSet[DicomTags.PatientsBirthDate].SetDateTime(0, DateTime.Now);
    //_baseDataSet[DicomTags.PatientsBirthDate].SetString(0, "19550512");
    _baseDataSet[DicomTags.PatientsSex].SetStringValue("O");
    //Study
    _baseDataSet[DicomTags.StudyInstanceUid].SetStringValue(DicomUid.GenerateUid().UID);
    _baseDataSet[DicomTags.StudyDate].SetDateTime(0, DateTime.Now);
    _baseDataSet[DicomTags.StudyTime].SetDateTime(0, DateTime.Now);
    _baseDataSet[DicomTags.AccessionNumber].SetStringValue("ACCEEL");
    _baseDataSet[DicomTags.StudyDescription].SetStringValue("X-Ray of a Moray Eel");
    _baseDataSet[DicomTags.ReferringPhysiciansName].SetNullValue();
    _baseDataSet[DicomTags.StudyId].SetNullValue();
    //Series
    _baseDataSet[DicomTags.SeriesInstanceUid].SetStringValue(DicomUid.GenerateUid().UID);
    _baseDataSet[DicomTags.Modality].SetStringValue("OT");
    _baseDataSet[DicomTags.SeriesNumber].SetStringValue("1");
    //SC Equipment
    _baseDataSet[DicomTags.ConversionType].SetStringValue("WSD");
    //General Image
    _baseDataSet[DicomTags.ImageType].SetStringValue(@"DERIVED\SECONDARY");
    _baseDataSet[DicomTags.PatientOrientation].SetNullValue();
    _baseDataSet[DicomTags.WindowWidth].SetStringValue("");
    _baseDataSet[DicomTags.WindowCenter].SetStringValue("");
    //Image Pixel
    if (rbMonoChrome.Checked)
    {
        _baseDataSet[DicomTags.SamplesPerPixel].SetInt32(0, 1);
        _baseDataSet[DicomTags.PhotometricInterpretation].SetStringValue("MONOCHROME2");
        _baseDataSet[DicomTags.BitsAllocated].SetInt32(0, 8);
        _baseDataSet[DicomTags.BitsStored].SetInt32(0, 8);
        _baseDataSet[DicomTags.HighBit].SetInt32(0, 7);
        _baseDataSet[DicomTags.PixelRepresentation].SetInt32(0, 0);
        _baseDataSet[DicomTags.PlanarConfiguration].SetInt32(0, 0);
    }
    if (rbColor.Checked)
    {
        _baseDataSet[DicomTags.SamplesPerPixel].SetInt32(0, 3);
        _baseDataSet[DicomTags.PhotometricInterpretation].SetStringValue("RGB");
        _baseDataSet[DicomTags.BitsAllocated].SetInt32(0, 8);
        _baseDataSet[DicomTags.BitsStored].SetInt32(0, 8);
        _baseDataSet[DicomTags.HighBit].SetInt32(0, 7);
        _baseDataSet[DicomTags.PixelRepresentation].SetInt32(0, 0);
        _baseDataSet[DicomTags.PlanarConfiguration].SetInt32(0, 0);
    }
}
private DicomFile ConvertImage(Bitmap image, int instanceNumber)
{
    DicomUid sopInstanceUid = DicomUid.GenerateUid();
    string fileName = @"C:\test.dcm"; // String.Format("{0}.dcm", sopInstanceUid.UID);
    //fileName = System.IO.Path.Combine(_tempFileDirectory, fileName);
    DicomFile dicomFile = new DicomFile(fileName, new DicomAttributeCollection(), _baseDataSet.Copy());
    //meta info
    dicomFile.MediaStorageSopInstanceUid = sopInstanceUid.UID;
    dicomFile.MediaStorageSopClassUid = SopClass.SecondaryCaptureImageStorageUid;
    //General Image
    dicomFile.DataSet[DicomTags.InstanceNumber].SetInt32(0, instanceNumber);
    DateTime now = Platform.Time;
    DateTime time = DateTime.MinValue.Add(new TimeSpan(now.Hour, now.Minute, now.Second));
    //SC Image
    dicomFile.DataSet[DicomTags.DateOfSecondaryCapture].SetDateTime(0, now);
    dicomFile.DataSet[DicomTags.TimeOfSecondaryCapture].SetDateTime(0, time);
    //Sop Common
    dicomFile.DataSet[DicomTags.InstanceCreationDate].SetDateTime(0, now);
    dicomFile.DataSet[DicomTags.InstanceCreationTime].SetDateTime(0, time);
    dicomFile.DataSet[DicomTags.SopInstanceUid].SetStringValue(sopInstanceUid.UID);
    int rows = 0, columns = 0;
    //Image Pixel
    if (rbMonoChrome.Checked)
    {
        dicomFile.DataSet[DicomTags.PixelData].Values = GetMonochromePixelData(image, out rows, out columns);
    }
    if (rbColor.Checked)
    {
        dicomFile.DataSet[DicomTags.PixelData].Values = GetColorPixelData(image, out rows, out columns);
    }
    //Image Pixel
    dicomFile.DataSet[DicomTags.Rows].SetInt32(0, rows);
    dicomFile.DataSet[DicomTags.Columns].SetInt32(0, columns);
    return dicomFile;
}
private static byte[] GetMonochromePixelData(Bitmap image, out int rows, out int columns)
{
    rows = image.Height;
    columns = image.Width;
    //At least one of rows or columns must be even.
    if (rows % 2 != 0 && columns % 2 != 0)
        --columns; //trim the last column.
    int size = rows * columns;
    //byte[] pixelData = MemoryManager.Allocate<byte>(size);
    byte[] pixelData = new byte[size];
    int i = 0;
    for (int row = 0; row < rows; ++row)
    {
        for (int column = 0; column < columns; column++)
        {
            pixelData[i++] = image.GetPixel(column, row).R;
        }
    }
    return pixelData;
}
private static byte[] GetColorPixelData(Bitmap image, out int rows, out int columns)
{
    rows = image.Height;
    columns = image.Width;
    //At least one of rows or columns must be even.
    if (rows % 2 != 0 && columns % 2 != 0)
        --columns; //trim the last column.
    BitmapData data = image.LockBits(new Rectangle(0, 0, columns, rows), ImageLockMode.ReadOnly, image.PixelFormat);
    IntPtr bmpData = data.Scan0;
    try
    {
        int stride = columns * 3;
        int size = rows * stride;
        //byte[] pixelData = MemoryManager.Allocate<byte>(size);
        byte[] pixelData = new byte[size];
        for (int i = 0; i < rows; ++i)
            Marshal.Copy(new IntPtr(bmpData.ToInt64() + i * data.Stride), pixelData, i * stride, stride);
        //swap BGR to RGB
        SwapRedBlue(pixelData);
        return pixelData;
    }
    finally
    {
        image.UnlockBits(data);
    }
}
private static Bitmap LoadImage(string file)
{
    Bitmap image = Image.FromFile(file, true) as Bitmap;
    if (image == null)
        throw new ArgumentException(String.Format("The specified file cannot be loaded as a bitmap {0}.", file));
    if (image.PixelFormat != PixelFormat.Format24bppRgb)
    {
        Platform.Log(LogLevel.Info, "Attempting to convert non-RGB image to RGB ({0}) before converting to DICOM.", file);
        Bitmap old = image;
        using (old)
        {
            image = new Bitmap(old.Width, old.Height, PixelFormat.Format24bppRgb);
            using (Graphics g = Graphics.FromImage(image))
            {
                g.DrawImage(old, 0, 0, old.Width, old.Height);
            }
        }
    }
    return image;
}
private static void SwapRedBlue(byte[] pixels)
{
    for (int i = 0; i < pixels.Length; i += 3)
    {
        byte temp = pixels[i];
        pixels[i] = pixels[i + 2];
        pixels[i + 2] = temp;
    }
}

Byte size of image grows after split and merge of a TIFF file

I am trying to split and merge a multi-page TIFF image. The reason I split is to draw annotations at each image level. The code is working fine; however, the merged TIFF is pretty large compared to the source TIFF. For example, I tested with a 17-page color TIFF (5 MB in size); after splitting and merging, it produces an 85 MB TIFF. I am using BitMiracle.LibTiff. I actually commented out the annotation code temporarily as well while trying to resolve this size issue, and I am not sure what I am doing wrong. Here is the code to split:
private List<Bitmap> SplitTiff(byte[] imageData)
{
    var bitmapList = new List<Bitmap>();
    var tiffStream = new TiffStreamForBytes(imageData);
    //open tif file
    var tif = Tiff.ClientOpen("", "r", null, tiffStream);
    //get number of pages
    var num = tif.NumberOfDirectories();
    if (num == 1)
        return new List<Bitmap>
        {
            new Bitmap(GetImage(imageData))
        };
    for (short i = 0; i < num; i++)
    {
        //set current page
        tif.SetDirectory(i);
        FieldValue[] photoMetric = tif.GetField(TiffTag.PHOTOMETRIC);
        Photometric photo = Photometric.MINISBLACK;
        if (photoMetric != null && photoMetric.Length > 0)
            photo = (Photometric)photoMetric[0].ToInt();
        if (photo != Photometric.MINISBLACK && photo != Photometric.MINISWHITE)
            bitmapList.Add(GetBitmapFromTiff(tif));
        // else
        //     bitmapList.Add(GetBitmapFromTiffBlack(tif, photo)); // commented temporarily to fix size issue
    }
    return bitmapList;
}
private static Bitmap GetBitmapFromTiff(Tiff tif)
{
    var value = tif.GetField(TiffTag.IMAGEWIDTH);
    var width = value[0].ToInt();
    value = tif.GetField(TiffTag.IMAGELENGTH);
    var height = value[0].ToInt();
    //Read the image into the memory buffer
    var raster = new int[height * width];
    if (!tif.ReadRGBAImage(width, height, raster))
    {
        return null;
    }
    var bmp = new Bitmap(width, height, PixelFormat.Format32bppRgb);
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    var bmpdata = bmp.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format32bppRgb);
    var bits = new byte[bmpdata.Stride * bmpdata.Height];
    for (var y = 0; y < bmp.Height; y++)
    {
        int rasterOffset = y * bmp.Width;
        int bitsOffset = (bmp.Height - y - 1) * bmpdata.Stride;
        for (int x = 0; x < bmp.Width; x++)
        {
            int rgba = raster[rasterOffset++];
            bits[bitsOffset++] = (byte)((rgba >> 16) & 0xff);
            bits[bitsOffset++] = (byte)((rgba >> 8) & 0xff);
            bits[bitsOffset++] = (byte)(rgba & 0xff);
            bits[bitsOffset++] = (byte)((rgba >> 24) & 0xff);
        }
    }
    System.Runtime.InteropServices.Marshal.Copy(bits, 0, bmpdata.Scan0, bits.Length);
    bmp.UnlockBits(bmpdata);
    return bmp;
}
And the code to merge the individual bitmaps back into a TIFF is here:
public static PrizmImage PrizmImageFromBitmaps(List<Bitmap> imageItems, string ext)
{
    if (imageItems.Count == 1 && !(ext.ToLower().Equals(".tif") || ext.ToLower().Equals(".tiff")))
        return new PrizmImage(new MemoryStream(ImageUtility.BitmapToByteArray(imageItems[0])), ext);
    var codecInfo = GetCodecInfo();
    var memoryStream = new MemoryStream();
    var encoderParams = new EncoderParameters(1);
    encoderParams.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.MultiFrame);
    var initialImage = imageItems[0];
    var masterBitmap = imageItems[0]; // new Bitmap(initialImage);
    masterBitmap.Save(memoryStream, codecInfo, encoderParams);
    encoderParams.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.FrameDimensionPage);
    for (var i = 1; i < imageItems.Count; i++)
    {
        var img = imageItems[i];
        masterBitmap.SaveAdd(img, encoderParams);
        img.Dispose();
    }
    encoderParams.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.Flush);
    masterBitmap.SaveAdd(encoderParams);
    memoryStream.Seek(0, SeekOrigin.Begin);
    encoderParams.Dispose();
    masterBitmap.Dispose();
    return new PrizmImage(memoryStream, ext);
}
Most probably, the issue is caused by the fact that you are converting all images to 32 bits-per-pixel bitmaps.
Suppose you have a black-and-white, fax-encoded image. It might be encoded as a 100 KB TIFF file. The same image might take 10+ megabytes when you save it as a 32bpp bitmap. Compressing those megabytes will help, but you will never achieve the same compression ratio as in the source image, because you have increased the amount of image data from 1 bit per pixel to 32 bits per pixel.
So you should not convert images to 32bpp bitmaps if possible. Try to preserve their properties and compression as much as you can. Have a look at the source code of the TiffCP utility for hints on how to do that.
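To illustrate the idea, here is a minimal sketch in the spirit of TiffCP (not its full logic), assuming a strip-based, non-tiled source: it copies each directory while carrying over the compression-related tags, so LibTiff.Net re-encodes the data with the original scheme. Null checks, tiled images, planar configuration, color maps and JPEG-specific tags are deliberately omitted.
using BitMiracle.LibTiff.Classic;

static void CopyPreservingCompression(string inputPath, string outputPath)
{
    using (Tiff input = Tiff.Open(inputPath, "r"))
    using (Tiff output = Tiff.Open(outputPath, "w"))
    {
        short dirCount = input.NumberOfDirectories();
        for (short dir = 0; dir < dirCount; dir++)
        {
            input.SetDirectory(dir);

            // Copy the tags that define geometry and encoding (GetField may return null
            // for optional tags; checks are omitted here for brevity).
            output.SetField(TiffTag.IMAGEWIDTH, input.GetField(TiffTag.IMAGEWIDTH)[0].ToInt());
            output.SetField(TiffTag.IMAGELENGTH, input.GetField(TiffTag.IMAGELENGTH)[0].ToInt());
            output.SetField(TiffTag.BITSPERSAMPLE, input.GetField(TiffTag.BITSPERSAMPLE)[0].ToInt());
            output.SetField(TiffTag.SAMPLESPERPIXEL, input.GetField(TiffTag.SAMPLESPERPIXEL)[0].ToInt());
            output.SetField(TiffTag.PHOTOMETRIC, input.GetField(TiffTag.PHOTOMETRIC)[0].ToInt());
            output.SetField(TiffTag.COMPRESSION, input.GetField(TiffTag.COMPRESSION)[0].ToInt());
            output.SetField(TiffTag.ROWSPERSTRIP, input.GetField(TiffTag.ROWSPERSTRIP)[0].ToInt());

            // Copy decoded strip data; LibTiff re-encodes it with the same compression scheme.
            byte[] buffer = new byte[input.StripSize()];
            for (int strip = 0; strip < input.NumberOfStrips(); strip++)
            {
                int read = input.ReadEncodedStrip(strip, buffer, 0, buffer.Length);
                output.WriteEncodedStrip(strip, buffer, read);
            }

            output.WriteDirectory();
        }
    }
}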
If you absolutely have to convert images to 32bpp bitmaps (you might have to if you add colorful annotations to them), then there is not much that can be done to reduce the resulting size. You might decrease the output size by 10-20 percent if you choose a better compression scheme and tune it properly. But that's all, I am afraid.

Kinect depth detection

I know how to do it in WPF, but I have a problem capturing depth in a WinForms application.
I found some code, shown below:
private void Kinect_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (depthFrame != null)
        {
            Bitmap DepthBitmap = new Bitmap(depthFrame.Width, depthFrame.Height, PixelFormat.Format32bppRgb);
            if (_depthPixels.Length != depthFrame.PixelDataLength)
            {
                _depthPixels = new DepthImagePixel[depthFrame.PixelDataLength];
                _mappedDepthLocations = new ColorImagePoint[depthFrame.PixelDataLength];
            }
            //Copy the depth frame data onto the bitmap
            var _pixelData = new short[depthFrame.PixelDataLength];
            depthFrame.CopyPixelDataTo(_pixelData);
            BitmapData bmapdata = DepthBitmap.LockBits(new Rectangle(0, 0, depthFrame.Width, depthFrame.Height), ImageLockMode.WriteOnly, DepthBitmap.PixelFormat);
            IntPtr ptr = bmapdata.Scan0;
            Marshal.Copy(_pixelData, 0, ptr, depthFrame.Width * depthFrame.Height);
            DepthBitmap.UnlockBits(bmapdata);
            pictureBox2.Image = DepthBitmap;
        }
    }
}
But this is not giving me the grayscale depth; the image comes out purple. Any improvements or help?
I found the solution myself, using a function to convert the depth frame:
void Kinect_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (depthFrame != null)
        {
            this.depthFrame32 = new byte[depthFrame.Width * depthFrame.Height * 4];
            //Update the image to the new format
            this.depthPixelData = new short[depthFrame.PixelDataLength];
            depthFrame.CopyPixelDataTo(this.depthPixelData);
            byte[] convertedDepthBits = this.ConvertDepthFrame(this.depthPixelData, ((KinectSensor)sender).DepthStream);
            Bitmap bmap = new Bitmap(depthFrame.Width, depthFrame.Height, PixelFormat.Format32bppRgb);
            BitmapData bmapdata = bmap.LockBits(new Rectangle(0, 0, depthFrame.Width, depthFrame.Height), ImageLockMode.WriteOnly, bmap.PixelFormat);
            IntPtr ptr = bmapdata.Scan0;
            Marshal.Copy(convertedDepthBits, 0, ptr, 4 * depthFrame.PixelDataLength);
            bmap.UnlockBits(bmapdata);
            pictureBox2.Image = bmap;
        }
    }
}
private byte[] ConvertDepthFrame(short[] depthFrame, DepthImageStream depthStream)
{
    //Run through the depth frame, correlating the two arrays
    for (int i16 = 0, i32 = 0; i16 < depthFrame.Length && i32 < this.depthFrame32.Length; i16++, i32 += 4)
    {
        // Console.WriteLine(i16 + "," + i32);
        //We don't care about the player information here, so we rule it out by shifting the value.
        int realDepth = depthFrame[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth;
        //We are left with 13 bits of depth information that we need to convert into an 8 bit number for each pixel.
        //There are hundreds of ways to do this. This is just the simplest one.
        //Let's create a byte variable called Distance.
        //We will assign this variable a number that will come from the conversion of those 13 bits.
        byte Distance = 0;
        //XBox Kinects (default) are limited between 800mm and 4096mm.
        int MinimumDistance = 800;
        int MaximumDistance = 4096;
        //XBox Kinects (default) are not reliable closer than 800mm, so let's take those useless measurements out.
        //If the distance on this pixel is bigger than 800mm, we paint it in its equivalent gray.
        if (realDepth > MinimumDistance)
        {
            //Convert the realDepth into the 0 to 255 range for our actual distance.
            //Use only one of the following Distance assignments.
            //White = Far, Black = Close:
            //Distance = (byte)(((realDepth - MinimumDistance) * 255 / (MaximumDistance - MinimumDistance)));
            //White = Close, Black = Far:
            Distance = (byte)(255 - ((realDepth - MinimumDistance) * 255 / (MaximumDistance - MinimumDistance)));
            //Use the distance to paint each channel (R, G and B) of the current pixel.
            //Painting R, G and B with the same value makes the pixel go from black to gray.
            this.depthFrame32[i32 + RedIndex] = (byte)(Distance);
            this.depthFrame32[i32 + GreenIndex] = (byte)(Distance);
            this.depthFrame32[i32 + BlueIndex] = (byte)(Distance);
        }
        //If we are closer than 800mm, just paint it black so we know this pixel is not giving a good value.
        else
        {
            this.depthFrame32[i32 + RedIndex] = 0;
            this.depthFrame32[i32 + GreenIndex] = 0;
            this.depthFrame32[i32 + BlueIndex] = 0;
        }
    }
    return this.depthFrame32;
}
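The snippet above relies on a few class-level members that were not shown in the answer. A minimal set of assumed declarations (names and offsets follow the common Kinect SDK samples, so treat them as placeholders rather than the original author's exact code) would be:
// Assumed supporting fields for the snippet above (not part of the original answer).
// The 32bpp buffer is laid out B, G, R, A in memory, hence these channel offsets.
private byte[] depthFrame32;
private short[] depthPixelData;
private const int BlueIndex = 0;
private const int GreenIndex = 1;
private const int RedIndex = 2;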
So I presume the RGB frame is working for you. In that case:
First, to enable the depth camera you need to call:
sensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH | /* any other flags you also use */);
Second, to start streaming you need to call:
if (int(streams & _Kinect_zed)) ret = sensor->NuiImageStreamOpen(
    NUI_IMAGE_TYPE_DEPTH,                                 // Depth camera or rgb camera?
    NUI_IMAGE_RESOLUTION_640x480,                         // Image resolution
    NUI_IMAGE_STREAM_FLAG_DISTINCT_OVERFLOW_DEPTH_VALUES, // Image stream flags // NUI_IMAGE_STREAM_FLAG_ENABLE_NEAR_MODE does not work !!!
    2,                                                    // Number of frames to buffer
    NULL,                                                 // Event handle
    &stream_hzed); else stream_hzed = NULL;
Beware: not all resolution/flag combinations work on all models of Kinect!
The one above is safe even for the older models like mine.
This is how I capture a frame (called repeatedly from a timer or thread loop):
ret = sensor->NuiImageStreamGetNextFrame(stream_hzed, 0, &imageFrame);
if (ret >= 0)
{
    // copy data from frame
    imageFrame.pFrameTexture->LockRect(0, &LockedRect, NULL, 0);
    if (LockedRect.Pitch != 0)
    {
        const BYTE* curr = (const BYTE*)LockedRect.pBits;
        union _col { BYTE u8[2]; WORD u16; } col;
        col.u16 = 0;
        pnt3d p;
        long ax, ay;
        float mxs = float(xs) / (62.0 * deg), mys = float(ys) / (48.6 * deg);
        for (int x = 0, y = 0;;)
        {
            col.u8[0] = *curr; curr++;
            col.u8[1] = *curr; curr++;
            p.raw = col.u16;
            p.rgb = &rgb_default;
            if (p.raw == 0x0000) p.z = 0.0; // p.z is the perpendicular distance from the sensor (the Kinect corrects this itself)
            else if (p.raw >= 0x8000) p.z = 4.0;
            else p.z = 0.8 + (float(p.raw - 6576) * 0.00012115165336374002280501710376283);
            // depth FOV correction
            p.x = zx[x] * p.z;
            p.y = zy[y] * p.z;
            // color FOV correction: zed 58.5° x 45.6° | rgb 62.0° x 48.6° | 25mm distance
            if (p.z > 0.0)
            {
                ax = (((x + 10 - xs2) * 241) >> 8) + xs2; // cameras x-offset and different FOV
                ay = (((y + 30 - ys2) * 240) >> 8) + ys2; // cameras y-offset??? and different FOV
                if ((ax >= 0) && (ax < xs))
                    if ((ay >= 0) && (ay < ys)) p.rgb = &rgb[ay][ax];
            }
            xyz[y][x] = p;
            x++; if (x >= xs) { x = 0; y++; if (y >= ys) break; }
        }
    }
    // release frame
    imageFrame.pFrameTexture->UnlockRect(0);
    ret = sensor->NuiImageStreamReleaseFrame(stream_hzed, &imageFrame);
    stream_changed |= _Kinect_zed;
}
Sorry for the incomplete source code ...
- everything is copy-pasted from my Kinect class (BDS2006 Turbo C++)
- so you need to check your code in case you forgot something
- and if so, then transform my code to C# (I am not a C# user)
- most likely you forgot to call NuiInitialize with the depth flag
- or you set an invalid resolution/flags/precision or framerate for your hardware
If nothing works at all, then you need to initialize the sensor in the first place:
int sensors;
INuiSensor *sensor;
if ((NUIGetSensorCount(&sensors)<0)||(sensors<1)) return false;
if (NUICreateSensorByIndex(0,&sensor)<0) return false;
If you link to the DLL on your own, then link only these functions:
typedef HRESULT(__stdcall *_NuiGetSensorCount )(int * pCount); _NuiGetSensorCount NUIGetSensorCount =NULL;
typedef HRESULT(__stdcall *_NuiCreateSensorByIndex)(int index,INuiSensor **ppNuiSensor); _NuiCreateSensorByIndex NUICreateSensorByIndex=NULL;
Every other function must be obtained via COM from the SDK headers!
If you link and call them on your own, you will not be connected to your physical Kinect!
Basically, the Kinect SDK is developed for WPF applications. In Windows Forms you have to convert the short array of depth data to a Bitmap to display it in a PictureBox. Based on my experiments, WPF is better for programming with the Kinect.
Below is the function that I used to convert the depth frame to a Bitmap for showing in a picture box.
private Bitmap ImageToBitmap(DepthImageFrame Image)
{
    short[] pixeldata = new short[Image.PixelDataLength];
    int stride = Image.Width * 2;
    Image.CopyPixelDataTo(pixeldata);
    Bitmap bmap = new Bitmap(Image.Width, Image.Height, PixelFormat.Format16bppRgb555);
    BitmapData bmapdata = bmap.LockBits(new Rectangle(0, 0, Image.Width, Image.Height), ImageLockMode.WriteOnly, bmap.PixelFormat);
    IntPtr ptr = bmapdata.Scan0;
    Marshal.Copy(pixeldata, 0, ptr, Image.PixelDataLength);
    bmap.UnlockBits(bmapdata);
    return bmap;
}
You may call it like this:
DepthImageFrame VFrame = e.OpenDepthImageFrame();
if (VFrame == null) return;
short[] pixelS = new short[VFrame.PixelDataLength];
Bitmap bmap = ImageToBitmap(VFrame);
