Using iTextSharp to add repeating data to an existing PDF? - c#

I am going to be using iTextSharp to insert data into a PDF that the Graphics department has created. Most of this is simple data-to-field mapping, but some of the data is a list of items that needs to be added (e.g. product data; users can have any number of products and the data needs to be displayed for all of them).
Is it possible to do this with iTextSharp? The PDF template obviously cannot be created with a fixed number of fields, as there is no way of knowing how many there will be - it could be 1, or 10, or even 100. What I need to be able to do is "re-use" a section of the PDF and repeat that section for each item within a loop.
Is that doable?

In the past I needed to do something similar: create a PDF with an unknown number of images plus content. In my case an 'Entry' was defined by an image and a set of fields.
What I did was use a document that served as an 'Entry' template. I then generated a temporary PDF file for each 'Entry' and stored the generated file names in a list.
After all 'Entries' were processed, I merged all the temporary PDF documents into one final document.
Here is some code to give you a better idea (it's not compilable, it just serves as a reference, as I took certain parts from an older project).
List<string> files = new List<string>(); // list of files to merge

foreach (string pageId in pages)
{
    // create an intermediate page
    string intermediatePdf = Path.Combine(_tempPath, System.Guid.NewGuid() + ".pdf");
    files.Add(intermediatePdf);

    string pdfTemplate = Path.Combine(_templatePath, _template);
    CreatePage(pdfTemplate, intermediatePdf, pc, pageValues, imageMap, tmd);
}

// merge into resulting pdf file
string outputFolder = "~/Output/";
if (preview)
{
    outputFolder = "~/temp/";
}
string pdfResult = Path.Combine(HttpContext.Current.Server.MapPath(outputFolder), Guid.NewGuid().ToString() + ".pdf");
PdfMerge.MergeFiles(pdfResult, files);

// delete temporary files...
foreach (string fd in files)
{
    File.Delete(fd);
}

return pdfResult;
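CreatePage itself isn't shown here; presumably it opens the 'Entry' template, fills its form fields for that entry and writes the intermediate file. A minimal sketch of that part using PdfStamper (simplified signature; the pc, imageMap and tmd parameters, which handled images and metadata in my project, are omitted, and the assumption is that pageValues maps AcroForm field names to values):

// requires: using System.IO; using System.Collections.Generic; using iTextSharp.text.pdf;
void CreatePage(string pdfTemplate, string intermediatePdf, Dictionary<string, string> pageValues)
{
    // open the 'Entry' template and write a filled copy to the intermediate file
    PdfReader reader = new PdfReader(pdfTemplate);
    using (FileStream fs = new FileStream(intermediatePdf, FileMode.Create))
    {
        PdfStamper stamper = new PdfStamper(reader, fs);
        AcroFields fields = stamper.AcroFields;
        foreach (KeyValuePair<string, string> kv in pageValues)
        {
            fields.SetField(kv.Key, kv.Value); // field name -> value for this 'Entry'
        }
        stamper.FormFlattening = true; // bake the values into the page content
        stamper.Close();
    }
    reader.Close();
}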
Here is the code to merge the templates:
public class PdfMerge
{
    public static void MergeFiles(string destinationFile, List<string> sourceFiles)
    {
        int f = 0;
        // we create a reader for a certain document
        PdfReader reader = new PdfReader(sourceFiles[f]);
        // we retrieve the total number of pages
        int n = reader.NumberOfPages;
        // step 1: creation of a document-object
        Document document = new Document(reader.GetPageSizeWithRotation(1));
        // step 2: we create a writer that listens to the document
        PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(destinationFile, FileMode.Create));
        // step 3: we open the document
        document.Open();
        PdfContentByte cb = writer.DirectContent;
        PdfImportedPage page;
        int rotation;
        // step 4: we add content
        while (f < sourceFiles.Count)
        {
            int i = 0;
            while (i < n)
            {
                i++;
                document.SetPageSize(reader.GetPageSizeWithRotation(i));
                document.NewPage();
                page = writer.GetImportedPage(reader, i);
                rotation = reader.GetPageRotation(i);
                if (rotation == 90 || rotation == 270)
                {
                    cb.AddTemplate(page, 0, -1f, 1f, 0, 0, reader.GetPageSizeWithRotation(i).Height);
                }
                else
                {
                    cb.AddTemplate(page, 1f, 0, 0, 1f, 0, 0);
                }
            }
            f++;
            if (f < sourceFiles.Count)
            {
                reader = new PdfReader(sourceFiles[f]);
                // we retrieve the total number of pages
                n = reader.NumberOfPages;
            }
        }
        // step 5: we close the document
        document.Close();
    }
}
Hope it helps!

Related

Merging multiple PDFs to a single very large PDF with PDFSharp without running out of memory

I'm using PDFsharp to merge a lot of files (stored on disk) into one PDF. Sometimes the PDF can be as large as 700MB. I'm using the sample code provided that basically creates an output PdfDocument, adds pages to it, and then calls outputDocument.Save(destinationPath), so the amount of memory used is about the same as the size of documents produced.
Is there a way to instead stream the changes to a file to avoid the memory consumption? If not, would there be a way to do it leveraging MigraDoc?
UPDATE
Based on a suggestion in the comments, I put together some code to close and re-open the document. While memory use is under control and the file does grow, it doesn't seem to be appending pages: if I make "paths" a list of 3000 single-page files, I still get a 500-page document. Here is the code:
var destinationFile = "c:\\test.pdf";
var directory = Path.GetDirectoryName(destinationFile);
if (!Directory.Exists(directory))
{
    Directory.CreateDirectory(directory);
}
var fs = new FileStream(destinationFile, FileMode.OpenOrCreate, FileAccess.ReadWrite);
var outputDocument = new PdfDocument(fs);
var count = 0;
// Iterate files (paths is a List<string> collection)
foreach (string path in paths)
{
    var inputDocument = PdfReader.Open(path, PdfDocumentOpenMode.Import);
    // Iterate pages
    for (int idx = 0; idx < inputDocument.PageCount; idx++)
    {
        // Get the page from the external document...
        PdfPage page = inputDocument.Pages[idx];
        // ...and add it to the output document.
        outputDocument.AddPage(page);
    }
    inputDocument.Dispose();
    count++;
    if (count % 500 == 0 || count == paths.Count)
    {
        outputDocument.Close();
        fs.Close();
        fs.Dispose();
        if (count < paths.Count)
        {
            fs = new FileStream(destinationFile, FileMode.Append, FileAccess.Write);
            outputDocument = new PdfDocument(fs);
        }
    }
}
UPDATE 2
Here is some new code that closes and re-opens the document using PdfReader. The program merges 2000 four-page, 140 KB PDFs; the output file is 273 MB. I tried it without closing and re-opening, and with closing and re-opening every 1000, 500, 250, and 100 files. Results were as follows:
No interval: 21 seconds, max memory 330 MB
1000 interval: 30 seconds, max memory 490 MB
500 interval: 55 seconds, max memory 710 MB
250 interval: 1 min 35 sec, max memory 780 MB
100 interval: 2 min 55 sec, max memory 850 MB
class Program
{
    public static void Main(string[] args)
    {
        var files = new List<string>();
        var basePath = AppDomain.CurrentDomain.BaseDirectory;
        for (var i = 0; i < 2000; i++)
        {
            files.Add($"{basePath}\\sample.pdf");
        }
        DoMerge(files, $"{basePath}\\output.pdf");
    }

    private static void DoMerge(List<string> paths, string destinationFile)
    {
        var directory = Path.GetDirectoryName(destinationFile);
        if (!Directory.Exists(directory))
        {
            Directory.CreateDirectory(directory);
        }
        var outputDocument = new PdfDocument();
        var count = 0;
        // Iterate files
        foreach (string path in paths)
        {
            // Open the document to import pages from it.
            try
            {
                var inputDocument = PdfReader.Open(path, PdfDocumentOpenMode.Import);
                // Iterate pages
                for (int idx = 0; idx < inputDocument.PageCount; idx++)
                {
                    // Get the page from the external document...
                    PdfPage page = inputDocument.Pages[idx];
                    // ...and add it to the output document.
                    outputDocument.AddPage(page);
                }
                inputDocument.Dispose();
                count++;
                if (count % 500 == 0 || count == paths.Count)
                {
                    outputDocument.Save(destinationFile);
                    outputDocument.Close();
                    outputDocument.Dispose();
                    if (count < paths.Count)
                    {
                        outputDocument = PdfReader.Open(destinationFile, PdfDocumentOpenMode.Import);
                    }
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
                Console.WriteLine(ex.StackTrace);
            }
        }
    }
}
To reduce the memory footprint, you can close the destination file from time to time, then open it again, and append more PDF files to it.
PDFsharp does not support swapping data to a file.
Make sure your app runs in 64-bit mode to allow it to use more than 2 GiB of RAM.
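A minimal sketch of that periodic save-and-reopen pattern (an assumption-laden sketch, not a drop-in fix: it reopens the partial result with PdfDocumentOpenMode.Modify rather than Import, the 500-file interval is arbitrary, and how much of the reopened file PDFsharp keeps in memory depends on your version):

// requires: using System.Collections.Generic; using PdfSharp.Pdf; using PdfSharp.Pdf.IO;
static void MergeInChunks(IList<string> paths, string destinationFile, int interval = 500)
{
    PdfDocument output = new PdfDocument();
    int count = 0;
    foreach (string path in paths)
    {
        using (PdfDocument input = PdfReader.Open(path, PdfDocumentOpenMode.Import))
        {
            for (int idx = 0; idx < input.PageCount; idx++)
            {
                output.AddPage(input.Pages[idx]);
            }
        }
        count++;
        if (count % interval == 0 || count == paths.Count)
        {
            // flush everything accumulated so far to disk and drop it from memory
            output.Save(destinationFile);
            output.Dispose();
            if (count < paths.Count)
            {
                // reopen the partial result for modification (not Import),
                // so that further AddPage calls actually append to it
                output = PdfReader.Open(destinationFile, PdfDocumentOpenMode.Modify);
            }
        }
    }
}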

Getting PDF page length

In my formatted PDF articles, one or more pages may be blank, and I want to detect them and remove them from the PDF file. If I can identify pages that are smaller than 60 KB, I think I can detect the empty ones, because pages that small are probably empty.
I tried this:
var reader = new PdfReader("D:\\_test\\file.pdf");
/*
 * With reader.FileLength I can get the whole PDF file size,
 * but I don't know how to get the individual pages' sizes...
 */
for (var i = 1; i <= reader.NumberOfPages; i++)
{
    /*
     * MessageBox.Show(???);
     */
}
I would do this in 2 steps:
first go over the document using IEventListener to detect which pages are empty
once you've determined which pages are empty, simply create a new document by copying the non-empty pages from the source document into the new document
step 1:
List<Integer> emptyPages = new ArrayList<>();
PdfDocument pdfDocument = new PdfDocument(new PdfReader(new File(SRC)));
for (int i = 1; i <= pdfDocument.getNumberOfPages(); i++) {
    IsEmptyEventListener l = new IsEmptyEventListener();
    new PdfCanvasProcessor(l).processPageContent(pdfDocument.getPage(i));
    if (l.isEmptyPage()) {
        emptyPages.add(i);
    }
}
Then you need a proper implementation of IsEmptyEventListener, which may be tricky and depends on your specific document(s). This is a demo:
class IsEmptyEventListener implements IEventListener {
    private int eventCount = 0;

    @Override
    public void eventOccurred(IEventData data, EventType type) {
        // perhaps count only text rendering events?
        eventCount++;
    }

    @Override
    public Set<EventType> getSupportedEvents() {
        return null; // null means we are interested in all event types
    }

    public boolean isEmptyPage() {
        return eventCount < 32;
    }
}
step 2:
Based on this example: https://developers.itextpdf.com/examples/stamping-content-existing-pdfs/clone-reordering-pages
void copyNonBlankPages(List<Integer> blankPages, PdfDocument src, PdfDocument dst) {
    int N = src.getNumberOfPages();
    List<Integer> toCopy = new ArrayList<>();
    for (int i = 1; i <= N; i++) {
        if (!blankPages.contains(i)) {
            toCopy.add(i);
        }
    }
    src.copyPagesTo(toCopy, dst);
}
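Since the code in the question uses iTextSharp 5 rather than iText 7, a rough C# alternative is to treat a page as blank when it has no extractable text, then keep only the other pages via SelectPages and PdfStamper. This is only a heuristic sketch (it ignores images and vector graphics, and the output path is made up for the example):

// requires: using System.Collections.Generic; using System.IO;
// using iTextSharp.text.pdf; using iTextSharp.text.pdf.parser;
PdfReader reader = new PdfReader("D:\\_test\\file.pdf");
List<int> pagesToKeep = new List<int>();
for (int i = 1; i <= reader.NumberOfPages; i++)
{
    // a page with no extractable text is assumed to be blank
    string text = PdfTextExtractor.GetTextFromPage(reader, i);
    if (!string.IsNullOrWhiteSpace(text))
    {
        pagesToKeep.Add(i);
    }
}
reader.SelectPages(pagesToKeep); // drop every page not in the list
using (FileStream fs = new FileStream("D:\\_test\\file_nonblank.pdf", FileMode.Create))
{
    new PdfStamper(reader, fs).Close();
}
reader.Close();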

Compression of Split PDF Files

How can I compress sliced PDF documents in C#?
I have a PDF document that I am slicing into one file per page. If the original PDF is 10 MB, the combined size after slicing grows to 15 MB, which is why I want to compress the sliced documents. Is there any way to do that? Please help.
public int ExtractPages(string sourcePdfPath, string DestinationFolder)
{
    int p = 0, initialcount = 0;
    try
    {
        iTextSharp.text.Document document;
        iTextSharp.text.pdf.PdfReader reader = new iTextSharp.text.pdf.PdfReader(new iTextSharp.text.pdf.RandomAccessFileOrArray(sourcePdfPath), new ASCIIEncoding().GetBytes(""));
        if (!Directory.Exists(DestinationFolder))
        {
            Directory.CreateDirectory(DestinationFolder);
        }
        else
        {
            DirectoryInfo di = new DirectoryInfo(DestinationFolder);
            initialcount = di.GetFiles("*.pdf", SearchOption.AllDirectories).Length;
        }
        for (p = 1; p <= reader.NumberOfPages; p++)
        {
            using (MemoryStream memoryStream = new MemoryStream())
            {
                document = new iTextSharp.text.Document();
                iTextSharp.text.pdf.PdfWriter writer = iTextSharp.text.pdf.PdfWriter.GetInstance(document, memoryStream);
                writer.SetPdfVersion(iTextSharp.text.pdf.PdfWriter.PDF_VERSION_1_2);
                writer.CompressionLevel = iTextSharp.text.pdf.PdfStream.BEST_COMPRESSION;
                writer.SetFullCompression();
                document.SetPageSize(reader.GetPageSize(p));
                document.NewPage();
                document.Open();
                document.AddDocListener(writer);
                iTextSharp.text.pdf.PdfContentByte cb = writer.DirectContent;
                iTextSharp.text.pdf.PdfImportedPage pageImport = writer.GetImportedPage(reader, p);
                int rot = reader.GetPageRotation(p);
                if (rot == 90 || rot == 270)
                {
                    cb.AddTemplate(pageImport, 0, -1.0F, 1.0F, 0, 0, reader.GetPageSizeWithRotation(p).Height);
                }
                else
                {
                    cb.AddTemplate(pageImport, 1.0F, 0, 0, 1.0F, 0, 0);
                }
                document.Close();
                document.Dispose();
                File.WriteAllBytes(DestinationFolder + "/" + p + ".pdf", memoryStream.ToArray());
            }
        }
        reader.Close();
        reader.Dispose();
    }
    catch
    {
    }
    finally
    {
        GC.Collect();
    }
    if (initialcount > (p - 1))
    {
        for (int k = (p - 1) + 1; k <= initialcount; k++)
        {
            try
            {
                File.Delete(DestinationFolder + "/" + k + ".pdf");
            }
            catch
            {
            }
        }
    }
    return p - 1;
}
First of all, you should not use PdfWriter with GetImportedPage and its direct content with AddTemplate for a task like the one at hand. Instead, have a look at the webified iTextSharp examples of iText in Action, 2nd Edition.
There you'll find the sample Burst.cs with the central code
PdfReader reader = new PdfReader(pdf);
// loop over all the pages in the original PDF
int n = reader.NumberOfPages;
for (int i = 0; i < n; i++)
{
    using (MemoryStream ms = new MemoryStream())
    {
        // We'll create as many new PDFs as there are pages
        using (Document document = new Document())
        {
            using (PdfCopy copy = new PdfCopy(document, ms))
            {
                document.Open();
                copy.AddPage(copy.GetImportedPage(reader, i + 1));
            }
        }
        // store ms.ToArray() somewhere
    }
}
(I removed some ZIP file packing those webified samples use.)
As you see, no need anymore to deal with page rotations or anything.
Now, this all being said, the sum of the sizes of the individual files will very likely be larger than the size of the original file. After all, resources could be shared in the original file. E.g. a font used on all pages only needs to be embedded once in the original, while in the split documents the font has to be embedded in each individual document that contains a page using that font.
PS: If keeping meta information is important, you might want to use PdfReader.selectPages and PdfStamper instead. For this I only have Java code:
for (int i = 1; i <= TEST_FILE_PAGES; i++)
{
    FileOutputStream fos = new FileOutputStream(String.format("%03d.pdf", i));
    PdfReader reader = new PdfReader(TEST_FILE);
    reader.selectPages(Collections.singletonList(i));
    PdfStamper stamper = new PdfStamper(reader, fos);
    stamper.close();
    fos.close();
}
This keeps the PDF meta information and, therefore, might be more apropos depending on your requirements. It is much slower, though, as for each page export the PdfReader contents are manipulated and, therefore, have to be re-read for exporting the next page.
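A rough C# equivalent of that Java snippet (an untested sketch; TEST_FILE and TEST_FILE_PAGES are placeholders for your input file and its page count, as above):

for (int i = 1; i <= TEST_FILE_PAGES; i++)
{
    PdfReader reader = new PdfReader(TEST_FILE);
    reader.SelectPages(new List<int> { i }); // keep only page i
    using (FileStream fos = new FileStream(string.Format("{0:D3}.pdf", i), FileMode.Create))
    {
        PdfStamper stamper = new PdfStamper(reader, fos);
        stamper.Close();
    }
    reader.Close();
}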

Using itextsharp to split a pdf into smaller pdf's based on size

So we have some really inefficient code that splits a PDF into smaller chunks based on a maximum allowed size. E.g. if the max size is 10 MB, an 8 MB file would be skipped, while a 16 MB file would be split based on the number of pages.
This is code that I inherited, and I feel like there has to be a more efficient way to do this, one that requires only one method and less instantiation of objects.
We use the following code to call the methods:
List<int> splitPoints = null;
List<byte[]> documents = null;
splitPoints = this.GetPDFSplitPoints(currentDocument, maxSize);
documents = this.SplitPDF(currentDocument, maxSize, splitPoints);
Methods:
private List<int> GetPDFSplitPoints(IClaimDocument currentDocument, int maxSize)
{
    List<int> splitPoints = new List<int>();
    PdfReader reader = null;
    Document document = null;
    int pagesRemaining = currentDocument.Pages;
    while (pagesRemaining > 0)
    {
        reader = new PdfReader(currentDocument.Data);
        document = new Document(reader.GetPageSizeWithRotation(1));
        using (MemoryStream ms = new MemoryStream())
        {
            PdfCopy copy = new PdfCopy(document, ms);
            PdfImportedPage page = null;
            document.Open();
            //Add pages until we run out from the original
            for (int i = 0; i < currentDocument.Pages; i++)
            {
                int currentPage = currentDocument.Pages - (pagesRemaining - 1);
                if (pagesRemaining == 0)
                {
                    //The whole document has been traversed
                    break;
                }
                page = copy.GetImportedPage(reader, currentPage);
                copy.AddPage(page);
                //If the current collection of pages exceeds the maximum size, we save off the index and start again
                if (copy.CurrentDocumentSize > maxSize)
                {
                    if (i == 0)
                    {
                        //One page is greater than the maximum size
                        throw new Exception("one page is greater than the maximum size and cannot be processed");
                    }
                    //We have gone one page too far, save this split index
                    splitPoints.Add(currentDocument.Pages - (pagesRemaining - 1));
                    break;
                }
                else
                {
                    pagesRemaining--;
                }
            }
            page = null;
            document.Close();
            document.Dispose();
            copy.Close();
            copy.Dispose();
            copy = null;
        }
    }
    if (reader != null)
    {
        reader.Close();
        reader = null;
    }
    document = null;
    return splitPoints;
}
private List<byte[]> SplitPDF(IClaimDocument currentDocument, int maxSize, List<int> splitPoints)
{
    var documents = new List<byte[]>();
    PdfReader reader = null;
    Document document = null;
    MemoryStream fs = null;
    int pagesRemaining = currentDocument.Pages;
    while (pagesRemaining > 0)
    {
        reader = new PdfReader(currentDocument.Data);
        document = new Document(reader.GetPageSizeWithRotation(1));
        fs = new MemoryStream();
        PdfCopy copy = new PdfCopy(document, fs);
        PdfImportedPage page = null;
        document.Open();
        //Add pages until we run out from the original
        for (int i = 0; i <= currentDocument.Pages; i++)
        {
            int currentPage = currentDocument.Pages - (pagesRemaining - 1);
            if (pagesRemaining == 0)
            {
                //We have traversed all pages
                //The call to copy.Close() MUST come before using fs.ToArray() because copy.Close() finalizes the document
                fs.Flush();
                copy.Close();
                documents.Add(fs.ToArray());
                document.Close();
                fs.Dispose();
                break;
            }
            page = copy.GetImportedPage(reader, currentPage);
            copy.AddPage(page);
            pagesRemaining--;
            if (splitPoints.Contains(currentPage + 1) == true)
            {
                //Need to start a new document
                //The call to copy.Close() MUST come before using fs.ToArray() because copy.Close() finalizes the document
                fs.Flush();
                copy.Close();
                documents.Add(fs.ToArray());
                document.Close();
                fs.Dispose();
                break;
            }
        }
        copy = null;
        page = null;
        fs.Dispose();
    }
    if (reader != null)
    {
        reader.Close();
        reader = null;
    }
    if (document != null)
    {
        document.Close();
        document.Dispose();
        document = null;
    }
    if (fs != null)
    {
        fs.Close();
        fs.Dispose();
        fs = null;
    }
    return documents;
}
As far as I can tell, the only code I can find online is in VB and doesn't necessarily address the size issue.
UPDATE:
We're experiencing OutOfMemory exceptions, and I believe it's an issue with the Large Object Heap. So one thought was to reduce the code footprint, which would possibly reduce the number of large objects on the heap.
Basically this is part of a loop that goes through any number of PDFs, splits them, and stores them in the database. Right now, we had to change the method from doing all of them at once (the last run was 97 PDFs of various sizes) to running 5 PDFs through the system every 5 minutes. This is not ideal and won't scale well when we ramp up the tool to more clients.
(We're dealing with 50-100 MB PDFs, but they could be larger.)
I also inherited this exact code, and there appears to be a major flaw in it. In the GetPDFSplitPoints method, it's checking the total size of the copied pages against maxsize to determine at which page to split the file.
In the SplitPDF method, when it reaches the page where the split occurs, the MemoryStream at that point is indeed below the maximum size allowed, and one more page would put it over the limit. But after document.Close() is executed, much more is added to the MemoryStream (in one example PDF I worked with, the Length of the MemoryStream went from 9 MB to 19 MB across the document.Close call). My understanding is that all the resources needed by the copied pages are added on Close.
I'm guessing I'll have to rewrite this code completely to ensure I don't exceed the max size while retaining the integrity of the original pages.
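One way to collapse this into a single pass is to build each chunk with PdfCopy directly and split against a reduced threshold, leaving head-room for the growth that happens on Close. This is only a sketch under that assumption: the 0.8 safety factor is arbitrary, the page that crosses the threshold stays in the current chunk (unlike the original split-point logic), and each finished chunk should still be verified against maxSize.

private List<byte[]> SplitPdfBySize(byte[] pdfData, long maxSize)
{
    var chunks = new List<byte[]>();
    PdfReader reader = new PdfReader(pdfData);
    int totalPages = reader.NumberOfPages;
    int currentPage = 1;
    // CurrentDocumentSize underestimates the final size, because fonts and other
    // shared resources are only written when the document is closed.
    long threshold = (long)(maxSize * 0.8);
    while (currentPage <= totalPages)
    {
        using (var ms = new MemoryStream())
        {
            Document document = new Document(reader.GetPageSizeWithRotation(currentPage));
            PdfCopy copy = new PdfCopy(document, ms);
            document.Open();
            int pagesInChunk = 0;
            while (currentPage <= totalPages)
            {
                copy.AddPage(copy.GetImportedPage(reader, currentPage));
                pagesInChunk++;
                currentPage++;
                if (copy.CurrentDocumentSize > threshold)
                {
                    if (pagesInChunk == 1)
                    {
                        throw new Exception("A single page exceeds the maximum size.");
                    }
                    break; // start a new chunk with the next page
                }
            }
            document.Close(); // must precede ms.ToArray(); Close finalizes the PDF
            chunks.Add(ms.ToArray());
        }
    }
    reader.Close();
    return chunks;
}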

How to add a blank page to a pdf using iTextSharp?

I am trying to do something I thought would be quite simple; however, it is not so straightforward, and Google has not helped.
I am using iTextSharp to merge PDF documents (letters) together so they can all be printed at once. If a letter has an odd number of pages I need to append a blank page, so we can print the letters double-sided.
Here is the basic code I have at the moment for merging all of the letters:
// initialise
MemoryStream pdfStreamOut = new MemoryStream();
Document document = null;
MemoryStream pdfStreamIn = null;
PdfReader reader = null;
int numPages = 0;
PdfWriter writer = null;

for (int i = 0; i < letterList.Count; i++)
{
    byte[] myLetterData = ...; // this letter's PDF data
    pdfStreamIn = new MemoryStream(myLetterData);
    reader = new PdfReader(pdfStreamIn);
    numPages = reader.NumberOfPages;

    // open the streams to use for the iteration
    if (i == 0)
    {
        document = new Document(reader.GetPageSizeWithRotation(1));
        writer = PdfWriter.GetInstance(document, pdfStreamOut);
        document.Open();
    }

    PdfContentByte cb = writer.DirectContent;
    PdfImportedPage page;
    int importedPageNumber = 0;
    while (importedPageNumber < numPages)
    {
        importedPageNumber++;
        document.SetPageSize(reader.GetPageSizeWithRotation(importedPageNumber));
        document.NewPage();
        page = writer.GetImportedPage(reader, importedPageNumber);
        cb.AddTemplate(page, 1f, 0, 0, 1f, 0, 0);
    }
}
I have tried using:
document.SetPageSize(reader.GetPageSizeWithRotation(1));
document.NewPage();
at the end of the for loop for an odd number of pages without success.
Well I was almost there. The document won't actually create the page until you put something on it, so as soon as I added an empty table, bam! It worked!
Here is the code that will add a blank page if the document I am merging has an odd number of pages:
if (numPages > 0 && numPages % 2 == 1)
{
    bool result = document.NewPage();
    document.Add(new Table(1));
}
If this doesn't work in newer versions, try this instead:
document.Add(new Chunk());
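For context, in the merge loop from the question this check goes right after the inner while loop, still inside the for loop (a sketch using the same variable names as above):

// after the while (importedPageNumber < numPages) loop
if (numPages > 0 && numPages % 2 == 1)
{
    document.SetPageSize(reader.GetPageSizeWithRotation(1));
    document.NewPage();
    document.Add(new Table(1)); // or document.Add(new Chunk()) in newer versions
}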
Another alternative that works successfully:
if (numPaginas % 2 != 0)
{
    documentoPdfUnico.SetPageSize(leitorPdf.GetPageSizeWithRotation(1));
    documentoPdfUnico.NewPage();
    conteudoPdf.AddTemplate(PdfTemplate.CreateTemplate(escritorPdf, 2480, 3508), 1f, 0, 0, 1f, 0, 0);
}
