DirectShow & .NET - Bitmap shows stripe from the right on the left side of the image? - c#

Sample Image:
I'm using DirectShow.net to get webcam footage into my program.
To accomplish this, I'm adding the source camera to the graph, and a VideoMixingRenderer9.
That part is all working swimmingly, but the part where I extract a frame using GetCurrentImage(out lpDib) is having what I can only describe as an odd issue.
What I am doing is using Marshal.PtrToStructure to create a BitmapInfoHeader from lpDib, then calculating the width, height, stride, and pixel format.
The problem comes when I look at the image stored in bitmap - It has a 10 px wide line down the left side that came from what is actually the right!
It is worth noting that the data I am getting from the GetCurrentImage call is actually upside down - note the call to Cap.RotateFlip.
IntPtr lpDib;
windowlessCtrl.GetCurrentImage(out lpDib);
BitmapInfoHeader head;
head = (BitmapInfoHeader)Marshal.PtrToStructure(lpDib, typeof(BitmapInfoHeader));
int width = head.Width;
int height = head.Height;
int stride = width * (head.BitCount / 8);
PixelFormat pixelFormat = PixelFormat.Format24bppRgb;
switch (head.BitCount)
{
case 24: pixelFormat = PixelFormat.Format24bppRgb; break;
case 32: pixelFormat = PixelFormat.Format32bppRgb; break;
case 48: pixelFormat = PixelFormat.Format48bppRgb; break;
default: throw new Exception("Unknown BitCount");
}
Cap = new Bitmap(width, height, stride, pixelFormat, lpDib);
Cap.RotateFlip(RotateFlipType.RotateNoneFlipY);
//if we examine Cap here (Cap.Save, for example) I'm seeing the odd stripe.
I'm completely lost here. Seems like some sort of offset issue, and I've tried tweaking with stride some, but to no avail (just creates odd diagonal look).

This code is made using the DirectShowLib samples and it works:
public Bitmap GetCurrentImage()
{
Bitmap bmp = null;
if (windowlessCtrl != null)
{
IntPtr currentImage = IntPtr.Zero;
try
{
int hr = windowlessCtrl.GetCurrentImage(out currentImage);
DsError.ThrowExceptionForHR(hr);
if (currentImage != IntPtr.Zero)
{
BitmapInfoHeader structure = new BitmapInfoHeader();
Marshal.PtrToStructure(currentImage, structure);
PixelFormat pixelFormat = PixelFormat.Format24bppRgb;
switch (structure.BitCount)
{
case 24:
pixelFormat = PixelFormat.Format24bppRgb;
break;
case 32:
pixelFormat = PixelFormat.Format32bppRgb;
break;
case 48:
pixelFormat = PixelFormat.Format48bppRgb;
break;
default:
throw new Exception("BitCount desconhecido");
}
// this part, new IntPtr(currentImage.ToInt64() + 40), is what fixes the problem of the stripe from the right showing up on the left.
bmp = new Bitmap(structure.Width, structure.Height, (structure.BitCount / 8) * structure.Width, pixelFormat, new IntPtr(currentImage.ToInt64() + 40));
bmp.RotateFlip(RotateFlipType.RotateNoneFlipY);
}
}
catch (Exception anyException)
{
MessageBox.Show("Falha gravando imagem da Webcam: " + anyException.ToString());
}
finally
{
Marshal.FreeCoTaskMem(currentImage);
}
}
return bmp;
}
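For reference, the hard-coded 40 in the code above is the size of the BITMAPINFOHEADER that GetCurrentImage writes in front of the pixel data. Below is a minimal sketch that reads the offset from the header instead of hard-coding it (assuming windowlessCtrl is an IVMRWindowlessControl9 and that the returned DIB is 32 bpp; the wrapper method itself is mine):
// Sketch only: compute the pixel-data offset from the DIB header instead of hard-coding 40.
// Assumes: using DirectShowLib; using System.Drawing; using System.Drawing.Imaging;
// using System.Runtime.InteropServices;
private Bitmap CaptureFrame(IVMRWindowlessControl9 windowlessCtrl)
{
    IntPtr dib;
    int hr = windowlessCtrl.GetCurrentImage(out dib);
    DsError.ThrowExceptionForHR(hr);
    try
    {
        BitmapInfoHeader header = new BitmapInfoHeader();
        Marshal.PtrToStructure(dib, header);
        // header.Size is biSize (40 for a plain BITMAPINFOHEADER);
        // for 24/32 bpp formats there is no palette, so the pixels start right after the header.
        IntPtr pixels = new IntPtr(dib.ToInt64() + header.Size);
        int stride = header.Width * (header.BitCount / 8);
        // Assumption: the VMR9 hands back a 32 bpp DIB here; adjust the PixelFormat otherwise.
        using (Bitmap frame = new Bitmap(header.Width, header.Height, stride,
                                         PixelFormat.Format32bppRgb, pixels))
        {
            frame.RotateFlip(RotateFlipType.RotateNoneFlipY);
            return new Bitmap(frame); // clone so the result outlives the unmanaged buffer
        }
    }
    finally
    {
        Marshal.FreeCoTaskMem(dib);
    }
}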

The video renderer is extending the bitmap to suit its own memory alignment needs, but it will have adjusted the media type to match. The VIDEOINFOHEADER (or VIDEOINFOHEADER2 structure) in the media type will have an rcTarget rectangle that defines the valid area within the larger bitmap. You can query the current media type on the input pin and get hold of this information.
You will find that the renderer only needs this extended stride for some formats, so perhaps your simplest approach is to force a different capture format. An alternative is to use the sample grabber filter instead of the VMR.
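A hedged sketch of what that query can look like with DirectShow.NET; here vmrInputPin is a hypothetical IPin for the VMR9 input pin (for example obtained with DsFindPin.ByDirection), and the connection is assumed to use a plain VIDEOINFOHEADER:
// Sketch: inspect the renderer's negotiated format to learn the real bitmap width and valid area.
AMMediaType mt = new AMMediaType();
int hr = vmrInputPin.ConnectionMediaType(mt); // vmrInputPin: hypothetical IPin for the VMR9 input
DsError.ThrowExceptionForHR(hr);
try
{
    if (mt.formatType == FormatType.VideoInfo)
    {
        VideoInfoHeader vih = (VideoInfoHeader)Marshal.PtrToStructure(mt.formatPtr, typeof(VideoInfoHeader));
        // BmiHeader.Width is the (possibly padded) width the renderer allocated;
        // TargetRect (rcTarget) marks the valid image area inside that larger bitmap.
        // An empty TargetRect means the whole bitmap is valid.
        int paddedWidth = vih.BmiHeader.Width;
        int validWidth = vih.TargetRect.right - vih.TargetRect.left;
        int validHeight = vih.TargetRect.bottom - vih.TargetRect.top;
        int stride = paddedWidth * (vih.BmiHeader.BitCount / 8);
        // Build the Bitmap with this stride, then crop to (validWidth x validHeight) if needed.
    }
}
finally
{
    DsUtils.FreeAMMediaType(mt);
}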

For those who want to avoid using the SampleGrabber: the "stripe" issue can be fixed by adding the size of the bitmap header to the IntPtr. However, this requires unsafe code.
IntPtr pBuffer = IntPtr.Zero;
int xBufferSize = 0;
int xWidth, xHeight;
basicVideo.get_VideoWidth(out xWidth);
basicVideo.get_VideoHeight(out xHeight);
int hr = basicVideo.GetCurrentImage(ref xBufferSize, IntPtr.Zero);
pBuffer = Marshal.AllocCoTaskMem(xBufferSize);
// Get the pixel buffer for the thumbnail
hr = basicVideo.GetCurrentImage(ref xBufferSize, pBuffer);
// Offset for BitmapHeader info
var bitmapHeader = (BitmapInfoHeader)Marshal.PtrToStructure(pBuffer, typeof(BitmapInfoHeader));
var pBitmapData = (byte*)pBuffer.ToPointer();
pBitmapData += bitmapHeader.Size;
// This will be the pointer to the bitmap pixels
var bitmapData = new IntPtr(pBitmapData);
//Change for your format type!
System.Drawing.Imaging.PixelFormat xFormat = (System.Drawing.Imaging.PixelFormat.Format32bppRgb);
int bitsPerPixel = ((int)xFormat & 0xff00) >> 8;
int bytesPerPixel = (bitsPerPixel + 7) / 8;
int stride = 4 * ((xWidth * bytesPerPixel + 3) / 4);
Bitmap image = new Bitmap(xWidth, xHeight, stride, xFormat, bitmapData);
image.RotateFlip(RotateFlipType.RotateNoneFlipY);
return image;
Calculation for Stride can be found here.
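If you would rather not compile with /unsafe, the same header skip can be done with plain IntPtr arithmetic (a sketch reusing the variable names from the snippet above):
// Sketch: skip the BITMAPINFOHEADER without unsafe pointer arithmetic.
var bitmapHeader = (BitmapInfoHeader)Marshal.PtrToStructure(pBuffer, typeof(BitmapInfoHeader));
IntPtr bitmapData = new IntPtr(pBuffer.ToInt64() + bitmapHeader.Size);
Bitmap image = new Bitmap(xWidth, xHeight, stride, xFormat, bitmapData);
image.RotateFlip(RotateFlipType.RotateNoneFlipY);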

Related

Edge of the image is cut off when converting DIB to ImageSource

I use the code below to convert a DIB from a TWAIN scanner to a BitmapSource, but I found that the edge of the image is cut off and the image size differs from the source image. Is there anything wrong in the code below when handling the image?
/// <summary>
/// Get managed BitmapSource from a DIB provided as a low level windows handle
///
/// Notes:
/// Data is copied from the source so the windows handle can be safely discarded
/// even when the BitmapSource is in use.
///
/// Only a subset of possible DIB formats is supported.
///
/// </summary>
/// <param name="dibHandle"></param>
/// <returns>A copy of the image in a managed BitmapSource </returns>
///
public static BitmapSource FormHDib(IntPtr dibHandle)
{
BitmapSource bs = null;
IntPtr bmpPtr = IntPtr.Zero;
bool flip = true; // vertically flip the image
try {
bmpPtr = Win32.GlobalLock(dibHandle);
Win32.BITMAPINFOHEADER bmi = new Win32.BITMAPINFOHEADER();
Marshal.PtrToStructure(bmpPtr, bmi);
if (bmi.biSizeImage == 0)
bmi.biSizeImage = (uint)(((((bmi.biWidth * bmi.biBitCount) + 31) & ~31) >> 3) * bmi.biHeight);
int palettSize = 0;
if (bmi.biClrUsed != 0)
throw new NotSupportedException("DibToBitmap: DIB with pallet is not supported");
// pointer to the beginning of the bitmap bits
IntPtr pixptr = (IntPtr)((int)bmpPtr + bmi.biSize + palettSize);
// Define parameters used to create the BitmapSource.
PixelFormat pf = PixelFormats.Default;
switch (bmi.biBitCount) {
case 32:
pf = PixelFormats.Bgr32;
break;
case 24:
pf = PixelFormats.Bgr24;
break;
case 8:
pf = PixelFormats.Gray8;
break;
case 1:
pf = PixelFormats.BlackWhite;
break;
default: // not supported
throw new NotSupportedException("DibToBitmap: Can't determine picture format (biBitCount=" + bmi.biBitCount + ")");
// break;
}
int width = bmi.biWidth;
int height = bmi.biHeight;
int stride = (int)(bmi.biSizeImage / height);
byte[] imageBytes = new byte[stride * height];
//Debug: Initialize the image with random data.
//Random value = new Random();
//value.NextBytes(rawImage);
if (flip) {
for (int i = 0, j = 0, k = (height - 1) * stride; i < height; i++, j += stride, k -= stride)
Marshal.Copy(((IntPtr)((int)pixptr + j)), imageBytes, k, stride);
} else {
Marshal.Copy(pixptr, imageBytes, 0, imageBytes.Length);
}
int xDpi = (int)Math.Round(bmi.biXPelsPerMeter * 2.54 / 100); // pels per meter to dots per inch
int yDpi = (int)Math.Round(bmi.biYPelsPerMeter * 2.54 / 100);
// Create a BitmapSource.
bs = BitmapSource.Create(width, height, xDpi, yDpi, pf, null, imageBytes, stride);
Win32.GlobalUnlock(pixptr);
Win32.GlobalFree(pixptr);
pixptr = IntPtr.Zero;
imageBytes = null;
bmi = null;
} catch (Exception ex) {
//string msg = ex.Message;
}
finally {
// cleanup
if (bmpPtr != IntPtr.Zero) { // locked sucsessfully
Win32.GlobalUnlock(dibHandle);
}
}
Win32.GlobalUnlock(bmpPtr);
Win32.GlobalFree(bmpPtr);
bmpPtr = IntPtr.Zero;
return bs;
}
This code is from here.
I found the root cause now: it was a wrong TWAIN setting.
Setting the edge padding to none solves the problem.

Kinect depth detection

I know how to do it in WPF, but I have a problem capturing depth in a WinForms application.
I found some code as below:
private void Kinect_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
{
if (depthFrame != null)
{
Bitmap DepthBitmap = new Bitmap(depthFrame.Width, depthFrame.Height, PixelFormat.Format32bppRgb);
if (_depthPixels.Length != depthFrame.PixelDataLength)
{
_depthPixels = new DepthImagePixel[depthFrame.PixelDataLength];
_mappedDepthLocations = new ColorImagePoint[depthFrame.PixelDataLength];
}
//Copy the depth frame data onto the bitmap
var _pixelData = new short[depthFrame.PixelDataLength];
depthFrame.CopyPixelDataTo(_pixelData);
BitmapData bmapdata = DepthBitmap.LockBits(new Rectangle(0, 0, depthFrame.Width,
depthFrame.Height), ImageLockMode.WriteOnly, DepthBitmap.PixelFormat);
IntPtr ptr = bmapdata.Scan0;
Marshal.Copy(_pixelData, 0, ptr, depthFrame.Width * depthFrame.Height);
DepthBitmap.UnlockBits(bmapdata);
pictureBox2.Image = DepthBitmap;
}
}
}
but this is not giving me the grayscale depth; it comes out purple. Any improvement or help?
I found the solution myself, with a function to convert the depth frame:
void Kinect_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
{
if (depthFrame != null)
{
this.depthFrame32 = new byte[depthFrame.Width * depthFrame.Height * 4];
//Update the image to the new format
this.depthPixelData = new short[depthFrame.PixelDataLength];
depthFrame.CopyPixelDataTo(this.depthPixelData);
byte[] convertedDepthBits = this.ConvertDepthFrame(this.depthPixelData, ((KinectSensor)sender).DepthStream);
Bitmap bmap = new Bitmap(depthFrame.Width, depthFrame.Height, PixelFormat.Format32bppRgb);
BitmapData bmapdata = bmap.LockBits(new Rectangle(0, 0, depthFrame.Width, depthFrame.Height), ImageLockMode.WriteOnly, bmap.PixelFormat);
IntPtr ptr = bmapdata.Scan0;
Marshal.Copy(convertedDepthBits, 0, ptr, 4 * depthFrame.PixelDataLength);
bmap.UnlockBits(bmapdata);
pictureBox2.Image = bmap;
}
}
}
private byte[] ConvertDepthFrame(short[] depthFrame, DepthImageStream depthStream)
{
//Run through the depth frame making the correlation between the two arrays
for (int i16 = 0, i32 = 0; i16 < depthFrame.Length && i32 < this.depthFrame32.Length; i16++, i32 += 4)
{
// Console.WriteLine(i16 + "," + i32);
//We don’t care about player’s information here, so we are just going to rule it out by shifting the value.
int realDepth = depthFrame[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth;
//We are left with 13 bits of depth information that we need to convert into an 8 bit number for each pixel.
//There are hundreds of ways to do this. This is just the simplest one.
//Lets create a byte variable called Distance.
//We will assign this variable a number that will come from the conversion of those 13 bits.
byte Distance = 0;
//XBox Kinects (default) are limited between 800mm and 4096mm.
int MinimumDistance = 800;
int MaximumDistance = 4096;
//XBox Kinects (default) are not reliable closer to 800mm, so let’s take those useless measurements out.
//If the distance on this pixel is bigger than 800mm, we will paint it in its equivalent gray
if (realDepth > MinimumDistance)
{
//Convert the realDepth into the 0 to 255 range for our actual distance.
//Use only one of the following Distance assignments
//White = Far
//Black = Close
//Distance = (byte)(((realDepth - MinimumDistance) * 255 / (MaximumDistance - MinimumDistance)));
//White = Close
//Black = Far
Distance = (byte)(255 - ((realDepth - MinimumDistance) * 255 / (MaximumDistance - MinimumDistance)));
//Use the distance to paint each layer (R, G and B) of the current pixel.
//Painting R, G and B with the same color will make it go from black to gray
this.depthFrame32[i32 + RedIndex] = (byte)(Distance);
this.depthFrame32[i32 + GreenIndex] = (byte)(Distance);
this.depthFrame32[i32 + BlueIndex] = (byte)(Distance);
}
//If we are closer than 800mm, just paint the pixel black so we know it is not giving a good value
else
{
this.depthFrame32[i32 + RedIndex] = 0;
this.depthFrame32[i32 + GreenIndex] = 0;
this.depthFrame32[i32 + BlueIndex] = 0;
}
}
return this.depthFrame32;
}
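The handler above also relies on a few fields that the post does not show; here is a minimal sketch of what they could look like (the index constants assume the B, G, R byte order of the Format32bppRgb buffer used above):
// Hypothetical declarations for the fields used by the snippet above (not shown in the original post).
private const int BlueIndex = 0;   // byte 0 of each 32-bit pixel
private const int GreenIndex = 1;  // byte 1
private const int RedIndex = 2;    // byte 2 (byte 3 is unused padding in Format32bppRgb)
private byte[] depthFrame32;       // 4 bytes per pixel, filled by ConvertDepthFrame
private short[] depthPixelData;    // raw 16-bit depth samples copied from the DepthImageFrame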
So I presume that the RGB frame is working out for you; in that case:
First, to enable the depth camera you need to call:
sensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH|all stuff you use also);
Second, to start streaming you need to call:
if (int(streams&_Kinect_zed)) ret=sensor->NuiImageStreamOpen(
NUI_IMAGE_TYPE_DEPTH, // Depth camera or rgb camera?
NUI_IMAGE_RESOLUTION_640x480, // Image resolution
NUI_IMAGE_STREAM_FLAG_DISTINCT_OVERFLOW_DEPTH_VALUES, // Image stream flags // NUI_IMAGE_STREAM_FLAG_ENABLE_NEAR_MODE does not work !!!
2, // Number of frames to buffer
NULL, // Event handle
&stream_hzed); else stream_hzed=NULL;
Beware: not all resolution/flag combinations work on all models of Kinect!
The one above is safe even for older models like mine.
This is how I capture a frame (called repeatedly from a timer or thread loop):
ret=sensor->NuiImageStreamGetNextFrame(stream_hzed,0,&imageFrame); if (ret>=0)
{
// copy data from frame
imageFrame.pFrameTexture->LockRect(0, &LockedRect, NULL, 0);
if (LockedRect.Pitch!=0)
{
const BYTE* curr = (const BYTE*) LockedRect.pBits;
union _col { BYTE u8[2]; WORD u16; } col;
col.u16=0;
pnt3d p;
long ax,ay;
float mxs=float(xs)/(62.0*deg),mys=float(ys)/(48.6*deg);
for(int x=0,y=0;;)
{
col.u8[0]=*curr; curr++;
col.u8[1]=*curr; curr++;
p.raw=col.u16;
p.rgb=&rgb_default;
if (p.raw==0x0000) p.z=0.0; // p.z is the perpendicular distance from the sensor (the Kinect corrects this itself)
else if (p.raw>=0x8000) p.z=4.0;
else p.z=0.8+(float(p.raw-6576)*0.00012115165336374002280501710376283);
// depth FOV correction
p.x=zx[x]*p.z;
p.y=zy[y]*p.z;
// color FOV correction zed 58.5° x 45.6° | rgb 62.0° x 48.6° | 25mm distance
if (p.z>0.0)
{
ax=(((x+10-xs2)*241)>>8)+xs2; // cameras x-offset and different FOV
ay=(((y+30-ys2)*240)>>8)+ys2; // cameras y-offset??? and different FOV
if ((ax>=0)&&(ax<xs))
if ((ay>=0)&&(ay<ys)) p.rgb=&rgb[ay][ax];
}
xyz[y][x]=p;
x++; if (x>=xs) { x=0; y++; if (y>=ys) break; }
}
}
// release frame
imageFrame.pFrameTexture->UnlockRect(0);
ret=sensor->NuiImageStreamReleaseFrame(stream_hzed, &imageFrame);
stream_changed|=_Kinect_zed;
}
Sorry for the incomplete source code ...
- it is all copy/pasted from my Kinect class (BDS2006 Turbo C++)
- so you need to check your code in case you forgot something
- and if so, then transform my code to C# (I am not a C# user)
- most likely you forgot to NuiInitialize with the depth flag
- or you set an invalid resolution/flags/precision or framerate for your HW
If nothing works at all, then you need to initialize the sensor in the first place:
int sensors;
INuiSensor *sensor;
if ((NUIGetSensorCount(&sensors)<0)||(sensors<1)) return false;
if (NUICreateSensorByIndex(0,&sensor)<0) return false;
If you link to the DLL on your own, then link only these functions:
typedef HRESULT(__stdcall *_NuiGetSensorCount )(int * pCount); _NuiGetSensorCount NUIGetSensorCount =NULL;
typedef HRESULT(__stdcall *_NuiCreateSensorByIndex)(int index,INuiSensor **ppNuiSensor); _NuiCreateSensorByIndex NUICreateSensorByIndex=NULL;
Every other function must be obtained via COM from inside the SDK headers!
If you link and use them on your own, then you will not be connected to your physical Kinect!
Basically the Kinect SDK is developed for WPF applications. In Windows Forms you have to convert the short array of depth data to a Bitmap to display it in a PictureBox. Based on my experiments, WPF is better for programming with the Kinect.
Below is the function that I used to convert the depth frame to a Bitmap for showing in a picture box.
private Bitmap ImageToBitmap(DepthImageFrame Image)
{
short[] pixeldata = new short[Image.PixelDataLength];
int stride = Image.Width * 2;
Image.CopyPixelDataTo(pixeldata);
Bitmap bmap = new Bitmap(Image.Width, Image.Height, PixelFormat.Format16bppRgb555);
BitmapData bmapdata = bmap.LockBits(new Rectangle(0, 0, Image.Width, Image.Height), ImageLockMode.WriteOnly, bmap.PixelFormat);
IntPtr ptr = bmapdata.Scan0;
Marshal.Copy(pixeldata, 0, ptr, Image.PixelDataLength);
bmap.UnlockBits(bmapdata);
return bmap;
}
You may call it like this:
DepthImageFrame VFrame = e.OpenDepthImageFrame();
if (VFrame == null) return;
short[] pixelS = new short[VFrame.PixelDataLength];
Bitmap bmap = ImageToBitmap(VFrame);

How to extract FlateDecoded Images from PDF with PDFSharp

How do I extract images which are FlateDecoded (such as PNG) out of a PDF document with PDFSharp?
I found that comment in a Sample of PDFSharp:
// TODO: You can put the code here that converts vom PDF internal image format to a
// Windows bitmap
// and use GDI+ to save it in PNG format.
// [...]
// Take a look at the file
// PdfSharp.Pdf.Advanced/PdfImage.cs to see how we create the PDF image formats.
Does anyone have a solution for this problem?
Thanks for your replies.
EDIT: Because I'm not able to answer my own question within 8 hours, I'll do it this way:
Thanks for your very fast reply.
I added some code to the method "ExportAsPngImage", but I didn't get the results I wanted. It just extracts a few more images (PNG), and they don't have the right colors and are distorted.
Here's my actual Code:
PdfSharp.Pdf.Filters.FlateDecode flate = new PdfSharp.Pdf.Filters.FlateDecode();
byte[] decodedBytes = flate.Decode(bytes);
System.Drawing.Imaging.PixelFormat pixelFormat;
switch (bitsPerComponent)
{
case 1:
pixelFormat = PixelFormat.Format1bppIndexed;
break;
case 8:
pixelFormat = PixelFormat.Format8bppIndexed;
break;
case 24:
pixelFormat = PixelFormat.Format24bppRgb;
break;
default:
throw new Exception("Unknown pixel format " + bitsPerComponent);
}
Bitmap bmp = new Bitmap(width, height, pixelFormat);
var bmpData = bmp.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.WriteOnly, pixelFormat);
int length = (int)Math.Ceiling(width * bitsPerComponent / 8.0);
for (int i = 0; i < height; i++)
{
int offset = i * length;
int scanOffset = i * bmpData.Stride;
Marshal.Copy(decodedBytes, offset, new IntPtr(bmpData.Scan0.ToInt32() + scanOffset), length);
}
bmp.UnlockBits(bmpData);
using (FileStream fs = new FileStream(@"C:\Export\PdfSharp\" + String.Format("Image{0}.png", count), FileMode.Create, FileAccess.Write))
{
bmp.Save(fs, System.Drawing.Imaging.ImageFormat.Png);
}
Is that the right way? Or should I choose another way? Thanks a lot!
I know this answer might be a few years too late, but maybe it will help others.
The distortion occurs in my case because image.Elements.GetInteger(PdfImage.Keys.BitsPerComponent) does not seem to return the correct value. As Vive la déraison pointed out under your question, you get BGR byte order when using Marshal.Copy. So reversing the bytes and rotating the bitmap after executing Marshal.Copy will do the job.
The resulting code looks like this:
private static void ExportAsPngImage(PdfDictionary image, ref int count)
{
int width = image.Elements.GetInteger(PdfImage.Keys.Width);
int height = image.Elements.GetInteger(PdfImage.Keys.Height);
var canUnfilter = image.Stream.TryUnfilter();
byte[] decodedBytes;
if (canUnfilter)
{
decodedBytes = image.Stream.Value;
}
else
{
PdfSharp.Pdf.Filters.FlateDecode flate = new PdfSharp.Pdf.Filters.FlateDecode();
decodedBytes = flate.Decode(image.Stream.Value);
}
int bitsPerComponent = 0;
while (decodedBytes.Length - ((width * height) * bitsPerComponent / 8) != 0)
{
bitsPerComponent++;
}
System.Drawing.Imaging.PixelFormat pixelFormat;
switch (bitsPerComponent)
{
case 1:
pixelFormat = System.Drawing.Imaging.PixelFormat.Format1bppIndexed;
break;
case 8:
pixelFormat = System.Drawing.Imaging.PixelFormat.Format8bppIndexed;
break;
case 16:
pixelFormat = System.Drawing.Imaging.PixelFormat.Format16bppArgb1555;
break;
case 24:
pixelFormat = System.Drawing.Imaging.PixelFormat.Format24bppRgb;
break;
case 32:
pixelFormat = System.Drawing.Imaging.PixelFormat.Format32bppArgb;
break;
case 64:
pixelFormat = System.Drawing.Imaging.PixelFormat.Format64bppArgb;
break;
default:
throw new Exception("Unknown pixel format " + bitsPerComponent);
}
decodedBytes = decodedBytes.Reverse().ToArray();
Bitmap bmp = new Bitmap(width, height, pixelFormat);
BitmapData bmpData = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.WriteOnly, bmp.PixelFormat);
int length = (int)Math.Ceiling(width * (bitsPerComponent / 8.0));
for (int i = 0; i < height; i++)
{
int offset = i * length;
int scanOffset = i * bmpData.Stride;
Marshal.Copy(decodedBytes, offset, new IntPtr(bmpData.Scan0.ToInt64() + scanOffset), length);
}
bmp.UnlockBits(bmpData);
bmp.RotateFlip(RotateFlipType.Rotate180FlipNone);
bmp.Save(String.Format("exported_Images\\Image{0}.png", count++), System.Drawing.Imaging.ImageFormat.Png);
}
The code might need some optimisation, but it did export FlateDecoded Images correctly in my case.
To get a Windows BMP, you just have to create a Bitmap header and then copy the image data into the bitmap. PDF images are byte aligned (every new line starts on a byte boundary) while Windows BMPs are DWORD aligned (every new line starts on a DWORD boundary (a DWORD is 4 bytes for historical reasons)).
All information you need for the Bitmap header can be found in the filter parameters or can be calculated.
The color palette is another FlateEncoded object in the PDF. You also copy that into the BMP.
This must be done for several formats (1 bit per pixel, 8 bpp, 24 bpp, 32 bpp).
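To make that concrete, here is a minimal sketch of the row-by-row copy under those alignment rules (the decoded byte array, dimensions and pixel format are assumed to be known already; the names are illustrative):
// Sketch: copy byte-aligned PDF image rows into a DWORD-aligned System.Drawing.Bitmap.
static Bitmap CopyPdfRows(byte[] decoded, int width, int height,
                          System.Drawing.Imaging.PixelFormat format, int bitsPerPixel)
{
    int srcStride = (width * bitsPerPixel + 7) / 8;   // PDF rows are padded to whole bytes only
    Bitmap bmp = new Bitmap(width, height, format);
    System.Drawing.Imaging.BitmapData data = bmp.LockBits(
        new Rectangle(0, 0, width, height),
        System.Drawing.Imaging.ImageLockMode.WriteOnly, format);
    try
    {
        for (int y = 0; y < height; y++)
        {
            // data.Stride is the DWORD-aligned row length GDI+ allocated for the bitmap
            Marshal.Copy(decoded, y * srcStride,
                         new IntPtr(data.Scan0.ToInt64() + (long)y * data.Stride), srcStride);
        }
    }
    finally
    {
        bmp.UnlockBits(data);
    }
    return bmp;
}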
Here's my full code for doing this.
I'm extracting a UPS shipping label from a PDF so I know the format in advance. If your extracted image is of an unknown type then you'll need to check the bitsPerComponent and handle it accordingly. I also only handle the first image here on the first page.
Note: I'm using TryUnfilter to 'deflate' which uses whatever filter is applied and decodes the data in-place for me. No need to call 'Deflate' explicitly.
var file = @"c:\temp\PackageLabels.pdf";
var doc = PdfReader.Open(file);
var page = doc.Pages[0];
{
// Get resources dictionary
PdfDictionary resources = page.Elements.GetDictionary("/Resources");
if (resources != null)
{
// Get external objects dictionary
PdfDictionary xObjects = resources.Elements.GetDictionary("/XObject");
if (xObjects != null)
{
ICollection<PdfItem> items = xObjects.Elements.Values;
// Iterate references to external objects
foreach (PdfItem item in items)
{
PdfReference reference = item as PdfReference;
if (reference != null)
{
PdfDictionary xObject = reference.Value as PdfDictionary;
// Is external object an image?
if (xObject != null && xObject.Elements.GetString("/Subtype") == "/Image")
{
// do something with your image here
// only the first image is handled here
var bitmap = ExportImage(xObject);
bitmap.Save(@"c:\temp\exported.png", System.Drawing.Imaging.ImageFormat.Png);
}
}
}
}
}
}
Using these helper functions
private static Bitmap ExportImage(PdfDictionary image)
{
string filter = image.Elements.GetName("/Filter");
switch (filter)
{
case "/FlateDecode":
return ExportAsPngImage(image);
default:
throw new ApplicationException(filter + " filter not implemented");
}
}
private static Bitmap ExportAsPngImage(PdfDictionary image)
{
int width = image.Elements.GetInteger(PdfImage.Keys.Width);
int height = image.Elements.GetInteger(PdfImage.Keys.Height);
int bitsPerComponent = image.Elements.GetInteger(PdfImage.Keys.BitsPerComponent);
var canUnfilter = image.Stream.TryUnfilter();
var decoded = image.Stream.Value;
Bitmap bmp = new Bitmap(width, height, System.Drawing.Imaging.PixelFormat.Format8bppIndexed);
BitmapData bmpData = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.WriteOnly, bmp.PixelFormat);
Marshal.Copy(decoded, 0, bmpData.Scan0, decoded.Length);
bmp.UnlockBits(bmpData);
return bmp;
}
So far this is my code. It works with many PNG files, but not the ones that come from Adobe Photoshop with an indexed colorspace:
private bool ExportAsPngImage(PdfDictionary image, string SaveAsName)
{
int width = image.Elements.GetInteger(PdfSharp.Pdf.Advanced.PdfImage.Keys.Width);
int height = image.Elements.GetInteger(PdfSharp.Pdf.Advanced.PdfImage.Keys.Height);
int bitsPerComponent = image.Elements.GetInteger(PdfSharp.Pdf.Advanced.PdfImage.Keys.BitsPerComponent);
var ColorSpace = image.Elements.GetArray(PdfImage.Keys.ColorSpace);
System.Drawing.Imaging.PixelFormat pixelFormat = System.Drawing.Imaging.PixelFormat.Format24bppRgb; //24 just to initialize
if (ColorSpace is null) //no colorspace: the buffered image is in BGR order instead of RGB, so change the byte order. Right now it works
{
byte[] origineel_byte_boundary = image.Stream.UnfilteredValue;
bitsPerComponent = (origineel_byte_boundary.Length) / (width * height);
switch (bitsPerComponent)
{
case 4:
pixelFormat = System.Drawing.Imaging.PixelFormat.Format32bppPArgb;
break;
case 3:
pixelFormat = System.Drawing.Imaging.PixelFormat.Format24bppRgb;
break;
default:
{
MessageBox.Show("Unknown pixel format " + bitsPerComponent, "Error", MessageBoxButtons.OK, MessageBoxIcon.Warning);
return false;
}
break;
}
Bitmap bmp = new Bitmap(width, height, pixelFormat); //copy raw bytes to "master" bitmap so we are out of pdf format to work with
System.Drawing.Imaging.BitmapData bmd = bmp.LockBits(new Rectangle(0, 0, width, height), System.Drawing.Imaging.ImageLockMode.WriteOnly, pixelFormat);
System.Runtime.InteropServices.Marshal.Copy(origineel_byte_boundary, 0, bmd.Scan0, origineel_byte_boundary.Length);
bmp.UnlockBits(bmd);
Bitmap bmp2 = new Bitmap(width, height, pixelFormat);
for (int indicex = 0; indicex < bmp.Width; indicex++)
{
for (int indicey = 0; indicey < bmp.Height; indicey++)
{
Color nuevocolor = bmp.GetPixel(indicex, indicey);
Color colorintercambiado = Color.FromArgb(nuevocolor.A, nuevocolor.B, nuevocolor.G, nuevocolor.R);
bmp2.SetPixel(indicex, indicey, colorintercambiado);
}
}
using (FileStream fs = new FileStream(SaveAsName, FileMode.Create, FileAccess.Write))
{
bmp2.Save(fs, System.Drawing.Imaging.ImageFormat.Png);
}
bmp2.Dispose();
bmp.Dispose();
}
else
{
// this is the case of Photoshop... work needs to be done here. I'm able to get the color palette but have no idea how to put it back or create the PNG file... (see the sketch after this code)
switch (bitsPerComponent)
{
case 4:
pixelFormat = System.Drawing.Imaging.PixelFormat.Format32bppArgb;
break;
default:
{
MessageBox.Show("Unknown pixel format " + bitsPerComponent, "Error", MessageBoxButtons.OK, MessageBoxIcon.Warning);
return false;
}
break;
}
if ((ColorSpace.Elements.GetName(0) == "/Indexed") && (ColorSpace.Elements.GetName(1) == "/DeviceRGB"))
{
//we need to create the palette
int paletteColorCount = ColorSpace.Elements.GetInteger(2);
List<System.Drawing.Color> paletteList = new List<Color>();
//Color[] palette = new Color[paletteColorCount+1]; // no idea why, but it seems there is always 1 color more. Transparency?
PdfObject paletteObj = ColorSpace.Elements.GetObject(3);
PdfDictionary paletteReference = (PdfDictionary)paletteObj;
byte[] palettevalues = paletteReference.Stream.Value;
for (int index = 0; index < (paletteColorCount + 1); index++)
{
//palette[index] = Color.FromArgb(1, palettevalues[(index*3)], palettevalues[(index*3)+1], palettevalues[(index*3)+2]); // RGB
paletteList.Add(Color.FromArgb(1, palettevalues[(index * 3)], palettevalues[(index * 3) + 1], palettevalues[(index * 3) + 2])); // RGB
}
}
}
return true;
}
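Regarding the open question in the indexed-colorspace branch: once the palette bytes have been read, they can be assigned to a Format8bppIndexed bitmap through its Palette property, and the unfiltered stream bytes are then one palette index per pixel. A minimal sketch, reusing the variable names from the method above and assuming 8 bits per component (note that opaque palette entries normally want alpha 255, not the 1 used when building paletteList):
// Sketch: build an 8-bpp indexed bitmap from PDF index data plus the extracted palette.
Bitmap indexed = new Bitmap(width, height, System.Drawing.Imaging.PixelFormat.Format8bppIndexed);
// ColorPalette cannot be constructed directly: take the bitmap's palette, overwrite it, assign it back.
System.Drawing.Imaging.ColorPalette palette = indexed.Palette;
for (int i = 0; i < paletteList.Count && i < palette.Entries.Length; i++)
{
    palette.Entries[i] = paletteList[i];
}
indexed.Palette = palette;
// Copy one byte (one palette index) per pixel, row by row, respecting the bitmap stride.
byte[] indices = image.Stream.UnfilteredValue;   // raw index data from the image stream
System.Drawing.Imaging.BitmapData data = indexed.LockBits(
    new Rectangle(0, 0, width, height),
    System.Drawing.Imaging.ImageLockMode.WriteOnly, indexed.PixelFormat);
try
{
    for (int y = 0; y < height; y++)
    {
        Marshal.Copy(indices, y * width,
                     new IntPtr(data.Scan0.ToInt64() + (long)y * data.Stride), width);
    }
}
finally
{
    indexed.UnlockBits(data);
}
indexed.Save(SaveAsName, System.Drawing.Imaging.ImageFormat.Png);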
A PDF may contain images with masks and with different colorspace options; that is why simply decoding an image object may not work properly in some cases.
So the code also needs to check for image masks (/ImageMask) and other properties of image objects (to see whether the image uses inverted colors or indexed colors) inside the PDF in order to recreate the image as it is displayed in the PDF. See the Image object, /ImageMask and /Decode dictionaries in the official PDF Reference.
I am not sure whether PDFSharp is capable of finding image mask objects inside a PDF, but iTextSharp is able to access image mask objects (see the PdfName.MASK object types).
Commercial tools like PDF Extractor SDK are able to extract images in both original form and in "as rendered" form.
I work for ByteScout, maker of PDF Extractor SDK
Maybe this doesn't directly answer the question, but another option for extracting images from a PDF is to use FreeSpire.PDF, which can extract images from a PDF easily. It is available as a NuGet package: https://www.nuget.org/packages/FreeSpire.PDF/. It handles all the image formats and can export as PNG. Their sample code is:
using System;
using System.Collections.Generic;
using System.Text;
using System.Drawing;
using Spire.Pdf;
namespace ExtractImagesFromPDF
{
class Program
{
static void Main(string[] args)
{
//Instantiate an object of Spire.Pdf.PdfDocument
PdfDocument doc = new PdfDocument();
//Load a PDF file
doc.LoadFromFile("sample.pdf");
List<Image> ListImage = new List<Image>();
for (int i = 0; i < doc.Pages.Count; i++)
{
// Get an object of Spire.Pdf.PdfPageBase
PdfPageBase page = doc.Pages[i];
// Extract images from Spire.Pdf.PdfPageBase
Image[] images = page.ExtractImages();
if (images != null && images.Length > 0)
{
ListImage.AddRange(images);
}
}
if (ListImage.Count > 0)
{
for (int i = 0; i < ListImage.Count; i++)
{
Image image = ListImage[i];
image.Save("image" + (i + 1).ToString() + ".png", System.Drawing.Imaging.ImageFormat.Png);
}
System.Diagnostics.Process.Start("image1.png");
}
}
}
}
(code taken from https://www.e-iceblue.com/Tutorials/Spire.PDF/Spire.PDF-Program-Guide/How-to-Extract-Image-From-PDF-in-C.html)

Using WritePixels when using a WriteableBitmap from an IntPtr to create a bitmap

I'm currently trying to use WriteableBitmap to take an IntPtr of a scan of images and turn each one into a Bitmap. I want to use WriteableBitmap because I'm having an issue with standard GDI+:
GDI+ System.Drawing.Bitmap gives error Parameter is not valid intermittently
There is a method on a WriteableBitmap that called WritePixels
http://msdn.microsoft.com/en-us/library/aa346817.aspx
I'm not sure what to set for the buffer and the stride; every example I find shows the stride as 0, although that throws an error. When I set the stride to 5 the image appears black. I know this may not be the most efficient code, but any help would be appreciated.
//create bitmap header
bmi = new BITMAPINFOHEADER();
//create initial rectangle
Int32Rect rect = new Int32Rect(0, 0, 0, 0);
//create duplicate intptr to use while in global lock
dibhand = dibhandp;
bmpptr = GlobalLock(dibhand);
//get the pixel sizes
pixptr = GetPixelInfo(bmpptr);
//create writeable bitmap
var wbitm = new WriteableBitmap(bmprect.Width, bmprect.Height, 96.0, 96.0, System.Windows.Media.PixelFormats.Bgr32, null);
//draw the image
wbitm.WritePixels(rect, dibhandp, 10, 0);
//convert the writeable bitmap to bitmap
var stream = new MemoryStream();
var encoder = new JpegBitmapEncoder();
encoder.Frames.Add(BitmapFrame.Create(wbitm));
encoder.Save(stream);
byte[] buffer = stream.GetBuffer();
var bitmap = new System.Drawing.Bitmap(new MemoryStream(buffer));
GlobalUnlock(dibhand);
GlobalFree(dibhand);
GlobalFree(dibhandp);
GlobalFree(bmpptr);
dibhand = IntPtr.Zero;
return bitmap;
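To answer the WritePixels part directly: the stride is the number of bytes per scan line of the source buffer and the buffer size is stride * height, so neither 0 nor an arbitrary small value will work. A minimal sketch, assuming a 32 bpp bottom-up DIB whose pixels start right after a plain BITMAPINFOHEADER (variable names follow the question):
// Sketch: write a 32-bpp DIB pixel buffer into a WriteableBitmap with a correct stride.
int width = bmprect.Width;           // dimensions of the captured image
int height = bmprect.Height;
int bytesPerPixel = 4;               // Bgr32
int stride = width * bytesPerPixel;  // bytes per scan line of the source buffer
int bufferSize = stride * height;    // total number of bytes WritePixels may read
var wbitm = new WriteableBitmap(width, height, 96.0, 96.0,
                                System.Windows.Media.PixelFormats.Bgr32, null);
// Skip the 40-byte BITMAPINFOHEADER so the pointer lands on the first pixel.
IntPtr pixelData = new IntPtr(bmpptr.ToInt64() + 40);
wbitm.WritePixels(new Int32Rect(0, 0, width, height), pixelData, bufferSize, stride);
// Note: DIB rows are stored bottom-up, so the result may still need a vertical flip.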
An efficient way to work on Bitmaps in C# is to temporarily switch to unsafe mode (I know I am not answering the question exactly, but I think the OP did not manage to get Bitmap working, so this could be a solution anyway). You just have to lock the bits and you're done:
unsafe private void GaussianFilter()
{
// Working images
using (Bitmap newImage = new Bitmap(width, height))
{
// Lock bits for performance reason
BitmapData newImageData = newImage.LockBits(new Rectangle(0, 0, newImage.Width,
newImage.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
byte* pointer = (byte*)newImageData.Scan0;
int offset = newImageData.Stride - newImageData.Width * 4;
// Compute gaussian filter on temp image
for (int j = 0; j < InputData.Height - 1; ++j)
{
for (int i = 1; i < InputData.Width - 1; ++i)
{
// You browse 4 bytes per 4 bytes
// The 4 bytes are: B G R A
byte blue = pointer[0];
byte green = pointer[1];
byte red = pointer[2];
byte alpha = pointer[3];
// Your business here by setting pointer[i] = ...
// If you don't use alpha don't forget to set it to 255 else your whole image will be black !!
// Go to next pixels
pointer += 4;
}
// Go to next line: do not forget pixel at last and first column
pointer += offset;
}
// Unlock image
newImage.UnlockBits(newImageData);
newImage.Save("D:\temp\OCR_gray_gaussian.tif");
}
}
This is really much more efficient than SetPixel(i, j), you just have to be careful about pointer limits (and not forget to unlock data when you're done).
Now to answer your question about stride: the stride is the length in bytes of a line, and it is a multiple of 4. In my example I use the format Format32bppArgb, which uses 4 bytes per pixel (R, G, B and alpha), so newImageData.Stride and newImageData.Width * 4 are always the same. I use the offset in my loops only to show where it would be necessary.
But if you use another format, for instance Format24bppRgb, which uses 3 bytes per pixel (R, G and B only), then there may be an offset between the stride and the width. For an image of 10 * 10 pixels in this format, you will have a stride of 10 * 3 = 30, + 2 to reach the nearest multiple of 4, i.e. 32.
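A small helper that captures that rule for any GDI+ pixel format (just a sketch; Image.GetPixelFormatSize returns the bits per pixel of a PixelFormat value):
// Sketch: GDI+ stride = bytes per row rounded up to a multiple of 4.
static int GetStride(int width, System.Drawing.Imaging.PixelFormat format)
{
    int bitsPerPixel = System.Drawing.Image.GetPixelFormatSize(format); // e.g. 24, 32
    int bytesPerRow = (width * bitsPerPixel + 7) / 8;                   // round up to whole bytes
    return (bytesPerRow + 3) & ~3;                                      // round up to a multiple of 4
}
// Example: GetStride(10, PixelFormat.Format24bppRgb) == 32  (10 * 3 = 30, padded to 32)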

Create thumbnail image

I want to display a thumbnail image in a GridView from a file location. How do I generate one for a .jpeg file?
I am using C# with ASP.NET.
You have to use the GetThumbnailImage method of the Image class:
https://msdn.microsoft.com/en-us/library/8t23aykb%28v=vs.110%29.aspx
Here's a rough example that takes an image file and makes a thumbnail image from it, then saves it back to disk.
Image image = Image.FromFile(fileName);
Image thumb = image.GetThumbnailImage(120, 120, ()=>false, IntPtr.Zero);
thumb.Save(Path.ChangeExtension(fileName, "thumb"));
It is in the System.Drawing namespace (in System.Drawing.dll).
Behavior:
If the Image contains an embedded thumbnail image, this method
retrieves the embedded thumbnail and scales it to the requested size.
If the Image does not contain an embedded thumbnail image, this method
creates a thumbnail image by scaling the main image.
Important: the remarks section of the Microsoft link above warns of certain potential problems:
The GetThumbnailImage method works well when the requested thumbnail
image has a size of about 120 x 120 pixels. If you request a large
thumbnail image (for example, 300 x 300) from an Image that has an
embedded thumbnail, there could be a noticeable loss of quality in the
thumbnail image.
It might be better to scale the main image (instead
of scaling the embedded thumbnail) by calling the DrawImage method.
The following code will write a proportionally resized image to the response; you can modify the code for your purposes:
public void WriteImage(string path, int width, int height)
{
Bitmap srcBmp = new Bitmap(path);
float ratio = (float)srcBmp.Width / srcBmp.Height;
SizeF newSize = new SizeF(width, height * ratio);
Bitmap target = new Bitmap((int) newSize.Width,(int) newSize.Height);
HttpContext.Response.Clear();
HttpContext.Response.ContentType = "image/jpeg";
using (Graphics graphics = Graphics.FromImage(target))
{
graphics.CompositingQuality = CompositingQuality.HighSpeed;
graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
graphics.CompositingMode = CompositingMode.SourceCopy;
graphics.DrawImage(srcBmp, 0, 0, newSize.Width, newSize.Height);
using (MemoryStream memoryStream = new MemoryStream())
{
target.Save(memoryStream, ImageFormat.Jpeg);
memoryStream.WriteTo(HttpContext.Response.OutputStream);
}
}
Response.End();
}
Here is a complete example of how to create a smaller image (thumbnail). This snippet resizes the image, rotates it when needed (if a phone was held vertically) and pads the image if you want to create square thumbs. This snippet creates a JPEG, but it can easily be modified for other file types. Even if the image is already smaller than the maximum allowed size, it will still be compressed and its resolution altered to create images with the same dpi and compression level.
using System;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;
using System.IO;
//set the resolution, 72 is usually good enough for displaying images on monitors
float imageResolution = 72;
//set the compression level. higher compression = better quality = bigger images
long compressionLevel = 80L;
public Image resizeImage(Image image, int maxWidth, int maxHeight, bool padImage)
{
int newWidth;
int newHeight;
//first we check if the image needs rotating (eg phone held vertical when taking a picture for example)
foreach (var prop in image.PropertyItems)
{
if (prop.Id == 0x0112)
{
int orientationValue = image.GetPropertyItem(prop.Id).Value[0];
RotateFlipType rotateFlipType = getRotateFlipType(orientationValue);
image.RotateFlip(rotateFlipType);
break;
}
}
//apply the padding to make a square image
if (padImage == true)
{
image = applyPaddingToImage(image, Color.Red);
}
//check if the with or height of the image exceeds the maximum specified, if so calculate the new dimensions
if (image.Width > maxWidth || image.Height > maxHeight)
{
double ratioX = (double)maxWidth / image.Width;
double ratioY = (double)maxHeight / image.Height;
double ratio = Math.Min(ratioX, ratioY);
newWidth = (int)(image.Width * ratio);
newHeight = (int)(image.Height * ratio);
}
else
{
newWidth = image.Width;
newHeight = image.Height;
}
//start the resize with a new image
Bitmap newImage = new Bitmap(newWidth, newHeight);
//set the new resolution
newImage.SetResolution(imageResolution, imageResolution);
//start the resizing
using (var graphics = Graphics.FromImage(newImage))
{
//set some encoding specs
graphics.CompositingMode = CompositingMode.SourceCopy;
graphics.CompositingQuality = CompositingQuality.HighQuality;
graphics.SmoothingMode = SmoothingMode.HighQuality;
graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
graphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
graphics.DrawImage(image, 0, 0, newWidth, newHeight);
}
//save the image to a memorystream to apply the compression level
using (MemoryStream ms = new MemoryStream())
{
EncoderParameters encoderParameters = new EncoderParameters(1);
encoderParameters.Param[0] = new EncoderParameter(Encoder.Quality, compressionLevel);
newImage.Save(ms, getEncoderInfo("image/jpeg"), encoderParameters);
//save the image as byte array here if you want the return type to be a Byte Array instead of Image
//byte[] imageAsByteArray = ms.ToArray();
}
//return the image
return newImage;
}
//=== image padding
public Image applyPaddingToImage(Image image, Color backColor)
{
//get the maximum size of the image dimensions
int maxSize = Math.Max(image.Height, image.Width);
Size squareSize = new Size(maxSize, maxSize);
//create a new square image
Bitmap squareImage = new Bitmap(squareSize.Width, squareSize.Height);
using (Graphics graphics = Graphics.FromImage(squareImage))
{
//fill the new square with a color
graphics.FillRectangle(new SolidBrush(backColor), 0, 0, squareSize.Width, squareSize.Height);
//put the original image on top of the new square
graphics.DrawImage(image, (squareSize.Width / 2) - (image.Width / 2), (squareSize.Height / 2) - (image.Height / 2), image.Width, image.Height);
}
//return the image
return squareImage;
}
//=== get encoder info
private ImageCodecInfo getEncoderInfo(string mimeType)
{
ImageCodecInfo[] encoders = ImageCodecInfo.GetImageEncoders();
for (int j = 0; j < encoders.Length; ++j)
{
if (encoders[j].MimeType.ToLower() == mimeType.ToLower())
{
return encoders[j];
}
}
return null;
}
//=== determine image rotation
private RotateFlipType getRotateFlipType(int rotateValue)
{
RotateFlipType flipType = RotateFlipType.RotateNoneFlipNone;
switch (rotateValue)
{
case 1:
flipType = RotateFlipType.RotateNoneFlipNone;
break;
case 2:
flipType = RotateFlipType.RotateNoneFlipX;
break;
case 3:
flipType = RotateFlipType.Rotate180FlipNone;
break;
case 4:
flipType = RotateFlipType.Rotate180FlipX;
break;
case 5:
flipType = RotateFlipType.Rotate90FlipX;
break;
case 6:
flipType = RotateFlipType.Rotate90FlipNone;
break;
case 7:
flipType = RotateFlipType.Rotate270FlipX;
break;
case 8:
flipType = RotateFlipType.Rotate270FlipNone;
break;
default:
flipType = RotateFlipType.RotateNoneFlipNone;
break;
}
return flipType;
}
//== convert image to base64
public string convertImageToBase64(Image image)
{
using (MemoryStream ms = new MemoryStream())
{
//convert the image to byte array
image.Save(ms, ImageFormat.Jpeg);
byte[] bin = ms.ToArray();
//convert byte array to base64 string
return Convert.ToBase64String(bin);
}
}
For the ASP.NET users, a little example of how to upload a file, resize it and display the result on the page.
//== the button click method
protected void Button1_Click(object sender, EventArgs e)
{
//check if there is an actual file being uploaded
if (FileUpload1.HasFile == false)
{
return;
}
using (Bitmap bitmap = new Bitmap(FileUpload1.PostedFile.InputStream))
{
try
{
//start the resize
Image image = resizeImage(bitmap, 256, 256, true);
//to visualize the result, display as base64 image
Label1.Text = "<img src=\"data:image/jpg;base64," + convertImageToBase64(image) + "\">";
//save your image to file sytem, database etc here
}
catch (Exception ex)
{
Label1.Text = "Oops! There was an error when resizing the Image.<br>Error: " + ex.Message;
}
}
}
Here is a version based on the accepted answer. It fixes two problems...
Improper disposing of the images.
Maintaining the aspect ratio of the image.
I found this tool to be fast and effective for both JPG and PNG files.
private static FileInfo CreateThumbnailImage(string imageFileName, string thumbnailFileName)
{
const int thumbnailSize = 150;
using (var image = Image.FromFile(imageFileName))
{
var imageHeight = image.Height;
var imageWidth = image.Width;
if (imageHeight > imageWidth)
{
imageWidth = (int) (((float) imageWidth / (float) imageHeight) * thumbnailSize);
imageHeight = thumbnailSize;
}
else
{
imageHeight = (int) (((float) imageHeight / (float) imageWidth) * thumbnailSize);
imageWidth = thumbnailSize;
}
using (var thumb = image.GetThumbnailImage(imageWidth, imageHeight, () => false, IntPtr.Zero))
//Save off the new thumbnail
thumb.Save(thumbnailFileName);
}
return new FileInfo(thumbnailFileName);
}
Here is an example of converting a high-resolution image into a thumbnail:
protected void Button1_Click(object sender, EventArgs e)
{
//---------- Getting the Image File
System.Drawing.Image img = System.Drawing.Image.FromFile(Server.MapPath("~/profile/Avatar.jpg"));
//---------- Getting Size of Original Image
double imgHeight = img.Size.Height;
double imgWidth = img.Size.Width;
//---------- Getting Decreased Size
double x = imgWidth / 200;
int newWidth = Convert.ToInt32(imgWidth / x);
int newHeight = Convert.ToInt32(imgHeight / x);
//---------- Creating Small Image
System.Drawing.Image.GetThumbnailImageAbort myCallback = new System.Drawing.Image.GetThumbnailImageAbort(ThumbnailCallback);
System.Drawing.Image myThumbnail = img.GetThumbnailImage(newWidth, newHeight, myCallback, IntPtr.Zero);
//---------- Saving Image
myThumbnail.Save(Server.MapPath("~/profile/NewImage.jpg"));
}
public bool ThumbnailCallback()
{
return false;
}
Source-
http://iknowledgeboy.blogspot.in/2014/03/c-creating-thumbnail-of-large-image-by.html
This is the code I'm using. It also works for .NET Core > 2.0 using the System.Drawing.Common NuGet package.
https://www.nuget.org/packages/System.Drawing.Common/
using System;
using System.Drawing;
class Program
{
static void Main()
{
const string input = "C:\\background1.png";
const string output = "C:\\thumbnail.png";
// Load image.
Image image = Image.FromFile(input);
// Compute thumbnail size.
Size thumbnailSize = GetThumbnailSize(image);
// Get thumbnail.
Image thumbnail = image.GetThumbnailImage(thumbnailSize.Width,
thumbnailSize.Height, null, IntPtr.Zero);
// Save thumbnail.
thumbnail.Save(output);
}
static Size GetThumbnailSize(Image original)
{
// Maximum size of any dimension.
const int maxPixels = 40;
// Width and height.
int originalWidth = original.Width;
int originalHeight = original.Height;
// Return original size if image is smaller than maxPixels
if (originalWidth <= maxPixels || originalHeight <= maxPixels)
{
return new Size(originalWidth, originalHeight);
}
// Compute best factor to scale entire image based on larger dimension.
double factor;
if (originalWidth > originalHeight)
{
factor = (double)maxPixels / originalWidth;
}
else
{
factor = (double)maxPixels / originalHeight;
}
// Return thumbnail size.
return new Size((int)(originalWidth * factor), (int)(originalHeight * factor));
}
}
Source:
https://www.dotnetperls.com/getthumbnailimage
