I am using Spire.Doc to create a Word file, and I followed their example like this:
public class WordController : Controller
{
    public void Download()
    {
        Document doc = new Document();
        Paragraph test = doc.AddSection().AddParagraph();
        test.AppendText("This is a test");
        doc.SaveToFile("Doc.doc");
        try
        {
            System.Diagnostics.Process.Start("Doc.doc");
        }
        catch (Exception)
        {
        }
    }
}
This opens the Word file in Microsoft Word, but how can I make it so that it's downloaded instead?
I've used return File() to return a PDF document to the View before, but it doesn't work with this.
Could you please try the code below and let me know whether it works? I haven't executed it myself, but I believe it should work, since I adapted it from my existing working code to match your requirement:
public class WordController : Controller
{
    public void Download()
    {
        byte[] toArray = null;
        Document doc = new Document();
        Paragraph test = doc.AddSection().AddParagraph();
        test.AppendText("This is a test");

        using (MemoryStream ms1 = new MemoryStream())
        {
            doc.SaveToStream(ms1, FileFormat.Doc);
            //save to byte array
            toArray = ms1.ToArray();
        }

        //Write it back to the client
        Response.ContentType = "application/msword";
        Response.AddHeader("content-disposition", "attachment; filename=Doc.doc");
        Response.BinaryWrite(toArray);
        Response.Flush();
        Response.End();
    }
}
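Since WordController is an MVC controller, the same bytes can also be handed back through the File() helper instead of writing to the Response directly. This is only a sketch I haven't run (the action name is arbitrary), reusing the SaveToStream call from the code above:

public ActionResult DownloadAsFileResult()
{
    Document doc = new Document();
    Paragraph test = doc.AddSection().AddParagraph();
    test.AppendText("This is a test");

    using (MemoryStream ms = new MemoryStream())
    {
        // Save the document into the stream, then hand the bytes to MVC.
        doc.SaveToStream(ms, FileFormat.Doc);

        // File() copies the byte array, so disposing the stream here is fine;
        // the third argument becomes the download file name.
        return File(ms.ToArray(), "application/msword", "Doc.doc");
    }
}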
Loading a .docx file into RichTextBox.Rtf (using Spire.Doc):
byte[] toArray = null;
Document doc = new Document();
doc.LoadFromFile("C://Users//Mini//Desktop//doc.docx");
// or: Document doc = new Document("C://Users//Mini//Desktop//doc.docx");
// Building the content in code instead also works:
// Paragraph test = doc.AddSection().AddParagraph();
// test.AppendText("This is a test");
using (MemoryStream ms1 = new MemoryStream())
{
    doc.SaveToStream(ms1, FileFormat.Rtf);
    toArray = ms1.ToArray();
    richTextBox1.Rtf = System.Text.Encoding.UTF8.GetString(toArray);
}
Related
I have an issue with trying to create a large PDF file. Basically I have a list of byte arrays, each containing a PDF as a byte array, and I want to merge them into a single PDF. This works great for smaller files (under 2,000 pages), but when I tried creating a 12,000-page file it bombed. Originally I was using a MemoryStream, but after some research a common suggestion was to use a FileStream instead. So I tried a file stream approach, but I get similar results: the list contains 3,800 records of 4 pages each, the MemoryStream bombs after around 570 records, and the FileStream after about 680. The file size at the time of the crash was 60 MB. What am I doing wrong? Here is the code I have; it crashes on the "copy.AddPage(curPg);" line inside the "for" loop.
private byte[] MergePDFs(List<byte[]> PDFs)
{
iTextSharp.text.Document doc = new iTextSharp.text.Document();
byte[] completePDF;
Guid uniqueId = Guid.NewGuid();
string tempFileName = Server.MapPath("~/" + uniqueId.ToString() + ".pdf");
//using (MemoryStream ms = new MemoryStream())
using(FileStream ms = new FileStream(tempFileName, FileMode.Create, FileAccess.Write, FileShare.Read))
{
iTextSharp.text.pdf.PdfCopy copy = new iTextSharp.text.pdf.PdfCopy(doc, ms);
doc.Open();
int i = 0;
foreach (byte[] PDF in PDFs)
{
i++;
// Create a reader
iTextSharp.text.pdf.PdfReader reader = new iTextSharp.text.pdf.PdfReader(PDF);
// Cycle through all the pages
for (int currentPageNumber = 1; currentPageNumber <= reader.NumberOfPages; ++currentPageNumber)
{
// Read a page
iTextSharp.text.pdf.PdfImportedPage curPg = copy.GetImportedPage(reader, currentPageNumber);
// Add the page over to the rest of them
copy.AddPage(curPg);
}
// Close the reader
reader.Close();
}
// Close the document
doc.Close();
// Close the copier
copy.Close();
// Convert the memorystream to a byte array
//completePDF = ms.ToArray();
}
//return completePDF;
return GetPDFsByteArray(tempFileName);
}
A couple of notes:
PdfCopy implements IDisposable, so you should try and see if a using block helps.
PdfCopy.FreeReader() will help.
Anyway, not sure if you're using MVC or WebForms, but here's a simple working HTTP handler tested with a 15 page 125KB test file that runs on my workstation:
<%@ WebHandler Language="C#" Class="MergeFiles" %>
using System;
using System.Collections.Generic;
using System.Web;
using System.IO;
using iTextSharp.text;
using iTextSharp.text.pdf;
public class MergeFiles : IHttpHandler
{
public void ProcessRequest(HttpContext context)
{
List<byte[]> pdfs = new List<byte[]>();
var pdf = File.ReadAllBytes(context.Server.MapPath("~/app_data/test.pdf"));
for (int i = 0; i < 4000; ++i) pdfs.Add(pdf);
var Response = context.Response;
Response.ContentType = "application/pdf";
Response.AddHeader(
"content-disposition",
"attachment; filename=MergeLotsOfPdfs.pdf"
);
Response.BinaryWrite(MergeLotsOfPdfs(pdfs));
}
byte[] MergeLotsOfPdfs(List<byte[]> pdfs)
{
using (var ms = new MemoryStream())
{
using (Document document = new Document())
{
using (PdfCopy copy = new PdfCopy(document, ms))
{
document.Open();
for (int i = 0; i < pdfs.Count; ++i)
{
using (PdfReader reader = new PdfReader(
new RandomAccessFileOrArray(pdfs[i]), null))
{
copy.AddDocument(reader);
copy.FreeReader(reader);
}
}
}
}
return ms.ToArray();
}
}
public bool IsReusable { get { return false; } }
}
I tried to make the output file similar to what you described in the question, but YMMV depending on how large the individual PDFs you're dealing with are.
So after a lot of messing around, I realized that there was just no way around it. However, I did manage to find a work-around: instead of returning a byte array, I return a temp file path, which I then transmit and delete afterwards.
private string MergeLotsOfPDFs(List<byte[]> PDFs)
{
Document doc = new Document();
Guid uniqueId = Guid.NewGuid();
string tempFileName = Server.MapPath("~/__" + uniqueId.ToString() + ".pdf");
using (FileStream ms = new FileStream(tempFileName, FileMode.Create, FileAccess.Write, FileShare.Read))
{
PdfCopy copy = new PdfCopy(doc, ms);
doc.Open();
int i = 0;
foreach (byte[] PDF in PDFs)
{
i++;
// Create a reader
PdfReader reader = new PdfReader(new RandomAccessFileOrArray(PDF), null);
// Cycle through all the pages
for (int currentPageNumber = 1; currentPageNumber <= reader.NumberOfPages; ++currentPageNumber)
{
// Read a page
PdfImportedPage curPg = copy.GetImportedPage(reader, currentPageNumber);
// Add the page over to the rest of them
copy.AddPage(curPg);
// Free memory used by the reader as we go
copy.FreeReader(reader);
}
reader.Close();
}
// Close the document
doc.Close();
// Close the copier
copy.Close();
}
// Return temp file path
return tempFileName;
}
And here is how I send that data to the client.
// Send the merged PDF file to the user.
System.Web.HttpResponse response = System.Web.HttpContext.Current.Response;
response.ClearContent();
response.ClearHeaders();
response.ContentType = "application/pdf";
response.AddHeader("Content-Disposition", "attachment; filename=1094C.pdf;");
response.WriteFile(tempFileName);
HttpContext.Current.Response.Flush(); // Sends all currently buffered output to the client.
DeleteFile(tempFileName); // Call right after flush but before close
HttpContext.Current.Response.SuppressContent = true; // Gets or sets a value indicating whether to send HTTP content to the client.
HttpContext.Current.ApplicationInstance.CompleteRequest(); // Causes ASP.NET to bypass all events and filtering in the HTTP pipeline chain of execution and directly execute the EndRequest event.
Lastly, here is a fancy DeleteFile method
private void DeleteFile(string fileName)
{
if (File.Exists(fileName))
{
try
{
File.Delete(fileName);
}
catch (Exception ex)
{
//Could not delete the file, wait and try again
try
{
System.GC.Collect();
System.GC.WaitForPendingFinalizers();
File.Delete(fileName);
}
catch
{
//Could not delete the file still
}
}
}
}
I've built an ActionResult to output data to a Word Document. I get no errors at compile or runtime, but when trying to open the file I get the message: 'We're sorry, We can't open filename.docx because we found a problem with its contents.'.
Here's what I'm trying to do:
public override void ExecuteResult(ControllerContext context)
{
//Create a response stream to create and write the Word document
HttpContext curContext = HttpContext.Current;
curContext.Response.Clear();
curContext.Response.AddHeader("content-disposition", "attachment;filename=text.docx");
curContext.Response.Charset = "";
curContext.Response.Cache.SetCacheability(HttpCacheability.NoCache);
curContext.Response.ContentType = "application/vnd.ms-word";
//Write the stream back to the response
var ms = new MemoryStream();
var repData = "<b>Mark's Test Book: With a Special Sub Title</b><br /><br /><b>Chapter: Chapter Title 1: Chapter Title sub</b><br /><br />";
Document.CreateAndAddHtmlToWordprocessingStream(ms, repData);
curContext.Response.OutputStream.Write(ms.GetBuffer(), 0, ms.GetBuffer().Length);
curContext.Response.End();
}
The static method is as follows:
public static void CreateAndAddHtmlToWordprocessingStream(Stream stream, string inputBody)
{
// Open a WordProcessingDocument based on a stream.
WordprocessingDocument wordprocessingDocument =
WordprocessingDocument.Create(stream, WordprocessingDocumentType.Document);
// Add a main document part.
MainDocumentPart mainPart = wordprocessingDocument.AddMainDocumentPart();
// Create the document structure.
mainPart.Document = new DocumentFormat.OpenXml.Wordprocessing.Document();
// Create the document body.
mainPart.Document.AppendChild(new Body());
var ms = new MemoryStream(System.Text.Encoding.Default.GetBytes("<html><head></head><body style=\"font-family:'Calibri';\">" + inputBody + "</body></html>"));
var altChunkId = "id";
var formatImportPart = mainPart.AddAlternativeFormatImportPart(AlternativeFormatImportPartType.Html, altChunkId);
formatImportPart.FeedData(ms);
var altChunk = new AltChunk { Id = altChunkId };
mainPart.Document.Body.Append(altChunk);
mainPart.Document.Save();
// Close the document handle.
wordprocessingDocument.Close();
// Caller must close the stream.
}
I've looked at these two posts, but didn't find anything that helped:
C# return memory stream from OpenXML resulting to a corrupted word file
Streaming In Memory Word Document using OpenXML SDK w/ASP.NET results in "corrupt" document
ms.GetBuffer() returns the MemoryStream's internal, automatically sized buffer. It starts with the data you have written, but it may contain extra \0 bytes at the end, reserved in case you continue to .Write().
To write out only the actual contents of the MemoryStream, you can use either of the following:
ms.Position = 0;
ms.CopyTo(curContext.Response.OutputStream);
or:
var msResult = ms.ToArray();
curContext.Response.OutputStream.Write(msResult, 0, msResult.Length);
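Applied to the ExecuteResult in the question, a minimal sketch of the second option would change only the stream-writing lines:

var ms = new MemoryStream();
Document.CreateAndAddHtmlToWordprocessingStream(ms, repData);
// ToArray() copies only the bytes that were actually written,
// so no trailing \0 padding ends up in the .docx.
byte[] docBytes = ms.ToArray();
curContext.Response.OutputStream.Write(docBytes, 0, docBytes.Length);
curContext.Response.End();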
You could create a method like this to handle the memory stream and the filename formatting:
private static void DynaGenWordDoc(string fileName, Page page, WordprocessingDocument wdoc)
{
page.Response.ClearContent();
page.Response.ClearHeaders();
page.Response.ContentType = "application/vnd.ms-word";
page.Response.AppendHeader("Content-Disposition", string.Format("attachment;filename={0}.docx", fileName));
using (MemoryStream memoryStream = new MemoryStream())
{
wdoc.SaveAs(memoryStream);
memoryStream.WriteTo(page.Response.OutputStream);
memoryStream.Close();
}
page.Response.Flush();
page.Response.End();
}
We have a system that stores some custom templating data in a Word document. Sometimes, updating this data causes Word to complain that the document is corrupted. When that happens, if I unzip the docx file and compare the contents to the previous version, the only difference appears to be the expected change in the customXML\item.xml file. If I re-zip the contents using 7zip, it seems to work OK (Word no longer complains that the document is corrupt).
The (simplified) code:
void CreateOrReplaceCustomXml(string filename, MyCustomData data)
{
using (var doc = WordprocessingDocument.Open(filename, true))
{
var part = GetCustomXmlParts(doc).SingleOrDefault();
if (part == null)
{
part = doc.MainDocumentPart.AddCustomXmlPart(CustomXmlPartType.CustomXml);
}
var serializer = new DataContractSerializer(typeof(MyCustomData));
using (var stream = new MemoryStream())
{
serializer.WriteObject(stream, data);
stream.Seek(0, SeekOrigin.Begin);
part.FeedData(stream);
}
}
}
IEnumerable<CustomXmlPart> GetCustomXmlParts(WordprocessingDocument doc)
{
return doc.MainDocumentPart.CustomXmlParts
.Where(part =>
{
using (var stream = doc.Package.GetPart(part.Uri).GetStream())
using (var streamReader = new StreamReader(stream))
{
return streamReader.ReadToEnd().Contains("Some.Namespace");
}
});
}
Any suggestions?
Since re-zipping works, the content itself appears to be well-formed, so it sounds like the zip process is at fault. Open the corrupted docx in 7-Zip and take note of the values in the "Method" column (especially for customXML\item.xml).
Compare that value to a working docx: is it the same or different? The "Deflate" method works.
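If you want to reproduce the 7-Zip repair in code, here is a rough sketch of my own (not from the original post) that re-packs every entry of a .docx with Deflate using System.IO.Compression (ZipFile lives in the System.IO.Compression.FileSystem assembly):

using System.IO;
using System.IO.Compression;

// Re-pack every entry so all parts end up Deflate-compressed.
static void RezipDocx(string sourcePath, string destPath)
{
    using (ZipArchive src = ZipFile.OpenRead(sourcePath))
    using (ZipArchive dest = ZipFile.Open(destPath, ZipArchiveMode.Create))
    {
        foreach (ZipArchiveEntry entry in src.Entries)
        {
            ZipArchiveEntry copy = dest.CreateEntry(entry.FullName, CompressionLevel.Optimal);
            using (Stream input = entry.Open())
            using (Stream output = copy.Open())
            {
                input.CopyTo(output);
            }
        }
    }
}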
I faced the same issue and it turned out it was due to encoding.
Do you already specify the same encoding when serializing/deserializing?
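For example, here is a minimal sketch based on the serializer code in the question (XmlWriterSettings and UTF8Encoding are standard .NET; part and data come from the question's method): writing through an XmlWriter with an explicit UTF-8 encoding keeps the declared encoding and the stored bytes consistent.

var serializer = new DataContractSerializer(typeof(MyCustomData));
var settings = new XmlWriterSettings { Encoding = new UTF8Encoding(false) };
using (var stream = new MemoryStream())
{
    using (var writer = XmlWriter.Create(stream, settings))
    {
        // Serialize with an explicit encoding instead of the writer's default.
        serializer.WriteObject(writer, data);
    }
    stream.Seek(0, SeekOrigin.Begin);
    part.FeedData(stream);
}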
A couple of suggestions:
a. Try doc.Package.Flush(); after you write the data back into the custom XML part.
b. You may have to delete all custom parts and add a new one. We are using the following code and it seems to work fine.
public static void ReplaceCustomXML(WordprocessingDocument myDoc, string customXML)
{
MainDocumentPart mainPart = myDoc.MainDocumentPart;
mainPart.DeleteParts<CustomXmlPart>(mainPart.CustomXmlParts);
CustomXmlPart customXmlPart = mainPart.AddCustomXmlPart(CustomXmlPartType.CustomXml);
using (StreamWriter ts = new StreamWriter(customXmlPart.GetStream()))
{
ts.Write(customXML);
ts.Flush();
ts.Close();
}
}
public static MemoryStream GetCustomXmlPart(MainDocumentPart mainPart)
{
foreach (CustomXmlPart part in mainPart.CustomXmlParts)
{
using (XmlTextReader reader =
new XmlTextReader(part.GetStream(FileMode.Open, FileAccess.Read)))
{
reader.MoveToContent();
if (reader.Name.Equals("aaaa", StringComparison.OrdinalIgnoreCase))
{
string str = reader.ReadOuterXml();
byte[] byteArray = Encoding.ASCII.GetBytes(str);
MemoryStream stream = new MemoryStream(byteArray);
return stream;
}
}
}
return null; //result;
}
using (WordprocessingDocument myDoc = WordprocessingDocument.Open(ms, true))
{
StreamReader reader = new StreamReader(memStream);
string FullXML = reader.ReadToEnd();
ReplaceCustomXML(myDoc, FullXML);
myDoc.Package.Flush();
//Code to save file
}
I have a problem with a MemoryStream from OpenXML. I can open a Word file, change it, and download it through the HttpResponse successfully if I do all the steps in a single method.
But if I try to do it in two different classes (or methods) by returning the MemoryStream, I get a corrupted Word file. I suspected a flushing or buffering problem, but I haven't found a solution.
Here is the working code:
public void FillTemplateOpenXmlWord(HttpResponse response)
{
string filePath = @"c:\template.docx";
byte[] filebytes = File.ReadAllBytes(filePath);
using (MemoryStream stream = new MemoryStream(filebytes))
{
using (WordprocessingDocument myDoc = WordprocessingDocument.Open(stream, true))
{
// do some changes
...
myDoc.MainDocumentPart.Document.Save();
}
string docx = "docx";
response.Clear();
response.ClearHeaders();
response.ClearContent();
response.AddHeader("content-disposition", "attachment; filename=\"" + docx + ".docx\"");
response.ContentType = "application/vnd.openxmlformats-officedocument.wordprocessingml.document";
response.ContentEncoding = Encoding.GetEncoding("ISO-8859-1");
stream.Position = 0;
stream.CopyTo(response.OutputStream);
response.End();
}
}
Here is the non-working code:
public void OpenFile(HttpResponse response)
{
MemoryStream stream = this.FillTemplateOpenXmlWord();
string docx = "docx";
response.Clear();
response.ClearHeaders();
response.ClearContent();
response.AddHeader("content-disposition", "attachment; filename=\"" + docx + ".docx\"");
response.ContentType = "application/vnd.openxmlformats-officedocument.wordprocessingml.document";
response.ContentEncoding = Encoding.GetEncoding("ISO-8859-1");
stream.Position = 0;
stream.CopyTo(response.OutputStream);
response.End();
}
public MemoryStream FillTemplateOpenXmlWord()
{
string filePath = @"c:\template.docx";
byte[] filebytes = File.ReadAllBytes(filePath);
using (MemoryStream stream = new MemoryStream(filebytes))
{
using (WordprocessingDocument myDoc = WordprocessingDocument.Open(stream, true))
{
// do some changes
...
myDoc.MainDocumentPart.Document.Save();
}
return stream;
}
}
Any idea?
Thank you.
Here's what I'm using to generate OpenXML files from a memory stream. In this case it builds an XLSX file from a template on the server, but it should be similar for other OpenXML formats.
Controller action:
public class ExportController : Controller
{
public FileResult Project(int id)
{
var model = SomeDateModel.Load(id);
ProjectExport export = new ProjectExport();
var excelBytes = export.Export(model);
FileResult fr = new FileContentResult(excelBytes, "application/vnd.ms-excel")
{
FileDownloadName = string.Format("Export_{0}_{1}.xlsx", DateTime.Now.ToString("yyMMdd"), model.Name)
};
return fr;
}
}
// Helper class
public class ProjectExport
{
private WorkbookPart workbook;
private Worksheet ws;
public byte[] Export(SomeDateModel model)
{
var template = new FileInfo(HostingEnvironment.MapPath(@"~\Export\myTemplate.xlsx"));
byte[] templateBytes = File.ReadAllBytes(template.FullName);
using (var templateStream = new MemoryStream())
{
templateStream.Write(templateBytes, 0, templateBytes.Length);
using (var excelDoc = SpreadsheetDocument.Open(templateStream, true))
{
workbook = excelDoc.WorkbookPart;
var sheet = workbook.Workbook.Descendants<Sheet>().First();
ws = ((WorksheetPart)workbook.GetPartById(sheet.Id)).Worksheet;
sheet.Name = model.Name;
// Here write some other stuff for setting values in cells etc...
}
templateStream.Position = 0;
var result = templateStream.ToArray();
templateStream.Flush();
return result;
}
}
It looks like the stream is being closed when you return, since it is in a using block. Wouldn't that dispose the MemoryStream as soon as FillTemplateOpenXmlWord ends?
The answer posted by gashac does not describe the issues you run into by not disposing the stream.
A MemoryStream that is never disposed keeps its data in memory longer than necessary (exactly what a using clause is there to prevent); memory streams keep their data in memory, whereas file streams keep it on disk.
Solution:
Save the memory stream into a byte array, dispose the memory stream, and return the byte array.
How to return a byte array instead of a stream
See the following thread on returning a file as a byte array:
HttpResponseMessage Content won't display PDF
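For instance, a sketch of the question's FillTemplateOpenXmlWord rewritten to return a byte[] (untested; it keeps the using block but copies the finished document out before the stream is disposed):

public byte[] FillTemplateOpenXmlWord()
{
    string filePath = @"c:\template.docx";
    byte[] filebytes = File.ReadAllBytes(filePath);
    using (MemoryStream stream = new MemoryStream(filebytes))
    {
        using (WordprocessingDocument myDoc = WordprocessingDocument.Open(stream, true))
        {
            // do some changes
            myDoc.MainDocumentPart.Document.Save();
        }
        // Copy the bytes out while the stream is still alive.
        return stream.ToArray();
    }
}

The caller can then write the byte array to the response (or wrap it in a new MemoryStream) without worrying about the original stream's lifetime.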
I have this demo code for iTextSharp
Document document = new Document();
try
{
PdfWriter.GetInstance(document, new FileStream("Chap0101.pdf", FileMode.Create));
document.Open();
document.Add(new Paragraph("Hello World"));
}
catch (DocumentException de)
{
Console.Error.WriteLine(de.Message);
}
catch (IOException ioe)
{
Console.Error.WriteLine(ioe.Message);
}
document.Close();
How do I get the controller to return the pdf document to the browser?
EDIT:
Running this code does open Acrobat but I get an error message "The file is damaged and could not be repaired"
public FileStreamResult pdf()
{
MemoryStream m = new MemoryStream();
Document document = new Document();
PdfWriter.GetInstance(document, m);
document.Open();
document.Add(new Paragraph("Hello World"));
document.Add(new Paragraph(DateTime.Now.ToString()));
m.Position = 0;
return File(m, "application/pdf");
}
Any ideas why this does not work?
Return a FileResult. The last line in your controller action would be something like:
return File("Chap0101.pdf", "application/pdf");
If you are generating this PDF dynamically, it may be better to use a MemoryStream, and create the document in memory instead of saving to file. The code would be something like:
Document document = new Document();
MemoryStream stream = new MemoryStream();
try
{
PdfWriter pdfWriter = PdfWriter.GetInstance(document, stream);
pdfWriter.CloseStream = false;
document.Open();
document.Add(new Paragraph("Hello World"));
}
catch (DocumentException de)
{
Console.Error.WriteLine(de.Message);
}
catch (IOException ioe)
{
Console.Error.WriteLine(ioe.Message);
}
document.Close();
stream.Flush(); //Always catches me out
stream.Position = 0; //Not sure if this is required
return File(stream, "application/pdf", "DownloadName.pdf");
I got it working with this code.
using iTextSharp.text;
using iTextSharp.text.pdf;
public FileStreamResult pdf()
{
MemoryStream workStream = new MemoryStream();
Document document = new Document();
PdfWriter.GetInstance(document, workStream).CloseStream = false;
document.Open();
document.Add(new Paragraph("Hello World"));
document.Add(new Paragraph(DateTime.Now.ToString()));
document.Close();
// Rewind so FileStreamResult reads the document from the beginning
workStream.Position = 0;
return new FileStreamResult(workStream, "application/pdf");
}
For the file to be opened directly in the browser instead of being downloaded, you must specify:
Response.AppendHeader("content-disposition", "inline; filename=file.pdf");
return new FileStreamResult(stream, "application/pdf");
If you return a FileResult from your action method, and use the File() extension method on the controller, doing what you want is pretty easy. There are overloads of the File() method that take the binary contents of the file, the path to the file, or a Stream.
public FileResult DownloadFile()
{
return File("path\\to\\pdf.pdf", "application/pdf");
}
I've run into similar problems and stumbled across a solution. I used two posts: one from Stack Overflow that shows the method to return for a download, and another that shows a working solution for iTextSharp and MVC.
public FileStreamResult About()
{
// Set up the document and the MS to write it to and create the PDF writer instance
MemoryStream ms = new MemoryStream();
Document document = new Document(PageSize.A4.Rotate());
PdfWriter writer = PdfWriter.GetInstance(document, ms);
// Open the PDF document
document.Open();
// Set up fonts used in the document
Font font_heading_1 = FontFactory.GetFont(FontFactory.TIMES_ROMAN, 19, Font.BOLD);
Font font_body = FontFactory.GetFont(FontFactory.TIMES_ROMAN, 9);
// Create the heading paragraph with the heading font
Paragraph paragraph;
paragraph = new Paragraph("Hello world!", font_heading_1);
// Add a horizontal line below the heading text and add it to the paragraph
iTextSharp.text.pdf.draw.VerticalPositionMark seperator = new iTextSharp.text.pdf.draw.LineSeparator();
seperator.Offset = -6f;
paragraph.Add(seperator);
// Add paragraph to document
document.Add(paragraph);
// Close the PDF document
document.Close();
// Hat tip to David for his code on stackoverflow for this bit
// https://stackoverflow.com/questions/779430/asp-net-mvc-how-to-get-view-to-generate-pdf
byte[] file = ms.ToArray();
MemoryStream output = new MemoryStream();
output.Write(file, 0, file.Length);
output.Position = 0;
HttpContext.Response.AddHeader("content-disposition","attachment; filename=form.pdf");
// Return the output stream
return File(output, "application/pdf"); //new FileStreamResult(output, "application/pdf");
}
FileStreamResult certainly works, but if you look at the Microsoft Docs, it derives from ActionResult -> FileResult, which has another derived class, FileContentResult. It "sends the contents of a binary file to the response", so if you already have the byte[], you should just use FileContentResult instead.
public ActionResult DisplayPDF()
{
byte[] byteArray = GetPdfFromWhatever();
return new FileContentResult(byteArray, "application/pdf");
}
You can create a custom class to modify the content type and add the file to the response.
http://haacked.com/archive/2008/05/10/writing-a-custom-file-download-action-result-for-asp.net-mvc.aspx
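Roughly along those lines, here is a sketch of my own (not the code from the linked post; the class name is made up) of a custom ActionResult that sets the content type and disposition itself:

using System.Web.Mvc;

public class PdfDownloadResult : ActionResult
{
    private readonly byte[] _content;
    private readonly string _fileName;

    public PdfDownloadResult(byte[] content, string fileName)
    {
        _content = content;
        _fileName = fileName;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var response = context.HttpContext.Response;
        response.ContentType = "application/pdf";
        response.AddHeader("Content-Disposition", "attachment; filename=" + _fileName);
        response.BinaryWrite(_content);
    }
}

An action can then simply return new PdfDownloadResult(pdfBytes, "report.pdf");.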
I know this question is old but I thought I would share this as I could not find anything similar.
I wanted to create my views/models as normal using Razor and have them rendered as Pdfs.
This way I had control over the PDF presentation using standard HTML output, rather than figuring out how to lay out the document using iTextSharp.
The project and source code is available here with nuget installation instructions:
https://github.com/andyhutch77/MvcRazorToPdf
Install-Package MvcRazorToPdf
You would normally do a Response.Flush followed by a Response.Close, but for some reason the iTextSharp library doesn't seem to like this. The data doesn't make it through and Adobe thinks the PDF is corrupt. Leave out the Response.Close function and see if your results are better:
Response.Clear();
Response.ContentType = "application/pdf";
Response.AppendHeader("Content-disposition", "attachment; filename=file.pdf"); // open in a new window
Response.OutputStream.Write(outStream.GetBuffer(), 0, outStream.GetBuffer().Length);
Response.Flush();
// For some reason, if we close the Response stream, the PDF doesn't make it through
//Response.Close();
HttpContext.Response.AddHeader("content-disposition","attachment; filename=form.pdf");
If the filename is generated dynamically, how do I define the filename here? In my case it is generated from a Guid.
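One way (a hypothetical snippet; output is the MemoryStream from the answer above) is to build the name first, for example from a Guid, and concatenate it into the header:

// Build the file name at runtime and use it in the Content-Disposition header.
string fileName = string.Format("form_{0:N}.pdf", Guid.NewGuid());
HttpContext.Response.AddHeader("content-disposition", "attachment; filename=" + fileName);
return File(output, "application/pdf");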
If you return varbinary data from the database to display a PDF in a popup or in the browser, follow this code:
View page:
@using (Html.BeginForm("DisplayPDF", "Scan", FormMethod.Post))
{
    <input type="submit" value="View PDF" />
}
Scan controller:
public ActionResult DisplayPDF()
{
byte[] byteArray = GetPdfFromDB(4);
MemoryStream pdfStream = new MemoryStream();
pdfStream.Write(byteArray, 0, byteArray.Length);
pdfStream.Position = 0;
return new FileStreamResult(pdfStream, "application/pdf");
}
private byte[] GetPdfFromDB(int id)
{
#region
byte[] bytes = { };
string constr = System.Configuration.ConfigurationManager.ConnectionStrings["Connection"].ConnectionString;
using (SqlConnection con = new SqlConnection(constr))
{
using (SqlCommand cmd = new SqlCommand())
{
cmd.CommandText = "SELECT Scan_Pdf_File FROM PWF_InvoiceMain WHERE InvoiceID=#Id and Enabled = 1";
cmd.Parameters.AddWithValue("#Id", id);
cmd.Connection = con;
con.Open();
using (SqlDataReader sdr = cmd.ExecuteReader())
{
if (sdr.HasRows == true)
{
sdr.Read();
bytes = (byte[])sdr["Scan_Pdf_File"];
}
}
con.Close();
}
}
return bytes;
#endregion
}