PDF sometimes does not open after BinaryWrite - C#

I built functionality to edit existing PDF content (adding some text and images), and afterwards I open the PDF for printing or download using the code below. The problem is that sometimes the PDF does not open after BinaryWrite.
byte[] outBuf = outStream1.GetBuffer();
HttpContext.Current.Response.Expires = 0;
HttpContext.Current.Response.Buffer = true;
HttpContext.Current.Response.ClearContent();
HttpContext.Current.Response.AddHeader("content-disposition", "attachment;filename=test.pdf");
HttpContext.Current.Response.ContentType = "application/pdf";
HttpContext.Current.Response.ContentEncoding = new System.Text.UTF8Encoding();
HttpContext.Current.Response.BinaryWrite(outBuf);
outStream1.Close();
HttpContext.Current.Response.End();
It works fine on my local machine and on the development server, and sometimes on the production server.
On the production server, after a while it starts opening a blank page; after the browser cache or temp files are cleared, it works again.
I cannot see where the problem is; maybe it is some client-side browser cache or memory issue.
But if it were a browser memory issue it should also happen locally, because I am using the same browser.
Please give me some idea or solution so that I can sort this out.

The GetBuffer method only returns exactly the content of the memory stream if it was created as a readonly stream from an array of bytes to begin with. Otherwise it returns the internal buffer, which may contain unused bytes at the end.
Use the ToArray method to get exactly the content of the memory stream, and nothing more:
byte[] outBuf = outStream1.ToArray();
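For illustration, a minimal sketch of the difference (the stream name mirrors the question; the 100-byte write is just a stand-in for the edited PDF):

MemoryStream outStream1 = new MemoryStream();
outStream1.Write(new byte[100], 0, 100);          // pretend these 100 bytes are the PDF

byte[] fromGetBuffer = outStream1.GetBuffer();    // the whole internal buffer (typically 256 bytes here)
byte[] fromToArray = outStream1.ToArray();        // exactly the 100 bytes that were written

Sending the GetBuffer result appends extra bytes after the PDF's trailing %%EOF marker, which some viewers tolerate and others reject - which fits the "works only sometimes" symptom.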

Another thing that sometimes persuades some browsers to treat your stream as a PDF is appending a dummy argument to your URL so that it ends with ".pdf" (for example, something like GetPdf.aspx?id=42&name=test.pdf - an illustrative URL, not one from your code). I am not sure this is really necessary in your case, since you are setting the Content-Disposition to attachment, but it won't hurt.

Related

Cannot Send File in Response

I have code that runs on the server which correctly generates a zip file and stores it on the server, and I have that file's location as a physical path.
Nothing I have attempted has allowed me to send that file down to the client as a download in the response.
Attempt 1:
System.IO.FileInfo fi = new System.IO.FileInfo(zipFilePath);
//setup HTML Download of provided Zip.
//application/zip
Response.ClearContent();
Response.Clear();
Response.ClearHeaders();
Response.Buffer = true;
Response.ContentType = "application/zip";
Response.AddHeader("Content-Disposition",
"attachment; filename=\"" + System.IO.Path.GetFileName(zipFilePath) + "\";");
Response.AddHeader("Content-Length", fi.Length.ToString());
Response.TransmitFile(zipFilePath);
Response.Flush();
Response.End();
No result. Code executes without error but there is no download to the client.
Attempt 2:
//Almost the same as attempt 1, but with WriteFile instead
Response.WriteFile(zipFilePath);
No result, same as Attempt 1.
Attempt 3:
//Note: Same Header Section as Attempts 1 and 2
System.IO.BinaryReader reader = new System.IO.BinaryReader(new System.IO.FileStream(zipFilePath, System.IO.FileMode.Open));
int CHUNK = 1024;
List<byte> FileArray = new List<byte>();
while (reader.BaseStream.Position < reader.BaseStream.Length)
    FileArray.AddRange(reader.ReadBytes(CHUNK));
byte[] bArray = FileArray.ToArray();
reader.Close();
Response.OutputStream.Write(bArray, 0, bArray.Length);
Response.Flush();
Response.End();
No result, same as previous attempts.
Attempt 4:
//Identical to Attempt 3, but using BinaryWrite
Response.BinaryWrite(bArray);
No result, same as previous attempts.
The Question
Every one of these code blocks runs with no error, but the Save File dialog never appears. I get nothing at all, and I cannot figure out for the life of me what I might be missing.
The file path has been verified as correct.
The code is running on the server, not on the client, so I cannot use the 'WebClient.Download' method.
If anyone has any suggestions, I'm all ears. I have no idea how to get this file to download to the client.
I tested your code (attempt 1) and got it working fine with a test file. If the file path were wrong, you'd get a System.IO.FileNotFoundException, so that's probably not the issue.
A couple of ways to address this:
Try inspecting the webpage in, for example, Chrome: right-click and choose Inspect, click the Network tab, and refresh the page (where you're supposed to get the file). Check the response headers for that request - what are they?
Try setting the content type to application/octet-stream.
Use the debugger in Visual Studio and step through.
This turned out to be an Ajax-related error causing issues between UpdatePanels and POST responses.
The issue was fixed in the page's Page_Load by adding the call
ScriptManager.GetCurrent(Page).RegisterPostBackControl(btnGenerate);
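For context, a minimal sketch of where that registration typically lives (assuming btnGenerate, as above, is the button that triggers the download):

protected void Page_Load(object sender, EventArgs e)
{
    // Register the button for a full postback so the file response is not
    // swallowed by the UpdatePanel's asynchronous postback handling.
    ScriptManager.GetCurrent(Page).RegisterPostBackControl(btnGenerate);
}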

IE9 randomly popping Windows Security Dialog when I send down a binary blob as a .xlsx

On our website we have multiple reports that can be downloaded as an Excel spreadsheet. We accomplish this by reading a blank template file from the hard drive, copying it into a MemoryStream, and pushing the data into the template with DocumentFormat.OpenXml.Spreadsheet; then we pass the MemoryStream to a function that sets the headers and copies the stream into the Response.
It works great in FF & Chrome, but IE9 (and 8, so my QA tells me) randomly pops a Windows Security login dialog asking you to log into the remote server. I can either cancel the dialog or hit OK (the credentials seem to be ignored) and still get the Excel file as expected. Looking at the requests (using Charles Proxy) I cannot get the login dialog to appear until I disable Charles Proxy again, so I cannot see whether there is any difference in the traffic between my dev machine and the server. It also doesn't happen when running debug from my localhost, only from the Dev/Test server.
Any help would be useful; the code in question follows. This is called from a server-side function in the code-behind, hence RespondAsExcel clears the response and writes the xlsx instead.
using (MemoryStream excelStream = new MemoryStream())
{
    using (FileStream template = new FileStream(Server.MapPath(@"Reports\AlignedTemplateRII.xlsx"), FileMode.Open, FileAccess.Read))
    {
        Master.CopyStream(template, excelStream);
    }
    //Logic here to push data into the MemoryStream using DocumentFormat.OpenXml.Spreadsheet;
    Master.RespondAsExcel(excelStream, pgmName);
}
public void RespondAsExcel(MemoryStream excelStream, string fileName)
{
    var contenttype = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
    Response.Clear();
    Response.ContentType = contenttype;
    fileName = Utils.ReplaceWhiteSpaceWithUnderScores(fileName);
    Response.AddHeader("content-disposition", "inline;filename=" + fileName);
    Response.Cache.SetCacheability(HttpCacheability.NoCache);
    Response.BinaryWrite(excelStream.ToArray());
    //If that doesn't work, can try this way:
    //excelStream.WriteTo(Response.OutputStream);
    Response.End();
}
public void CopyStream(Stream source, Stream destination)
{
    byte[] buffer = new byte[32768];
    int bytesRead;
    do
    {
        bytesRead = source.Read(buffer, 0, buffer.Length);
        destination.Write(buffer, 0, bytesRead);
    } while (bytesRead != 0);
}
A couple of ideas come to mind regarding that "extra authentication dialog" that can always be dismissed... I won't promise this is your issue, but it sure smells like a first cousin of it.
Office 2007 and later documents open HTTP-based repositories with the WebClient libraries, which do not honor any of IE's security-zone filters when requests are made. If the file is requested by IE and the host URL contains dots (implying an FQDN), then even if the site is anonymously authenticated (requiring no credentials), you'll get the "credential" dialog, which can be cancelled or simply clicked three times and discarded. I was dealing with this problem just yesterday, and as best I can tell there's no workaround if the file is delivered with IE. There's some quirk about how IE delivers the file that makes Office apps believe they have to authenticate the request before opening it, even though the file has already been delivered to the client!
The dialog issue may be overcome if the document is delivered from a host server in the same domain as the requesting machine, e.g. some-server.a.domain.com to my-machine.a.domain.com.
The second idea is strictly born of my own experience: the vnd.openxmlformats content types sometimes introduce their own oddness in document-streaming situations. We've just used a type of application/vnd.ms-excel and, while it seems it should map to the same application, the problems don't seem to be as prevalent.
Perhaps that can give you some thoughts on going forward. Ultimately, right now, I don't think there's an ideal solution for the situation you're encountering. We're in the same boat, and had to tell our in-house clients that get the dialog to just hit "Cancel," and they get the document they want.
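If you want to try the vnd.ms-excel suggestion against the RespondAsExcel() method above, it is just a matter of swapping the content type (sketch only; as the answer says, results are hit and miss):

// Sketch: fall back to the older Excel MIME type instead of the OpenXML one.
var contenttype = "application/vnd.ms-excel";
Response.ContentType = contenttype;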
In your RespondAsExcel() method, change your Content-Disposition response header from inline to attachment. This will force the browser to download the file and open it as read-only. See KB899927.
Response.AddHeader("content-disposition", "attachment;filename=" + fileName);
I had something similar with VBScript when using Response.ContentType = "application/vnd.ms-excel". I simply added the following header and the Windows Security popup no longer appeared:
Response.AddHeader "content-disposition","attachment; filename=your_file_name_here.xls"

How to save/download a PDF embedded in a web page without a .pdf filename

I'm writing a web-scraping program in C#. So far I have been able to log in to the website, save the cookie, and return the source code of another page. From this source code I get a link that takes me to a PDF, but the URL doesn't end with a .pdf extension. In the browser, this page shows the PDF and there are controls in the browser, including a save button.
I believe the PDF page was created with ColdFusion, as it has .cfm, CFID and CFTOKEN in the URL.
How do I save this pdf file programmatically?
Two answers have suggested I save the binary stream to pdf. How do I get the binary data in the first place? I have tried the following:
byte[] result;
byte[] buffer = new byte[4096];
WebRequest wr = WebRequest.Create(billURL);
using (WebResponse response = wr.GetResponse())
{
    using (Stream responseStream = response.GetResponseStream())
    {
        using (MemoryStream memoryStream = new MemoryStream())
        {
            int count = 0;
            do
            {
                count = responseStream.Read(buffer, 0, buffer.Length);
                memoryStream.Write(buffer, 0, count);
            } while (count != 0);
            result = memoryStream.ToArray();
        }
    }
}
Do I then want to save result as a pdf, or am I doing something wrong there?
The common way in CF to stream a PDF to the browser is this:
<cfheader name="Content-Disposition" value="attachment;filename=#PDFFileName#">
<cfcontent type="application/pdf" reset="true" variable="#toBinary(PDFinMemory)#">
Use a C# WebRequest to fetch that URL, then check the response header for a Content-Type of application/pdf. If it matches, save the binary stream to a PDF file on disk.
Assuming the CFID and CFTOKEN are not really needed (you can test the URL without them and see if it still fetches the PDF successfully):
Use WebRequest to make a GET request to that URL (see: http://support.microsoft.com/kb/307023).
Save the binary stream as a PDF file.
I get a link that takes me to a pdf, but the page doesn't end with .pdf extension ..
How do I get the binary data in the first place?
In addition to the other suggestions, one small clarification: the file extension does not really matter; what is important is the content. A .cfm script can return any content type, not just text/html, so it can serve a PDF, an image, etcetera. As long as your link returns type application/pdf, you should get back a binary stream (i.e. the PDF) that you can save to a file. The original file name can be obtained from the WebResponse headers.
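Putting those pieces together, a hedged sketch of the download step (it builds on the snippet in the question; billURL is the scraped link, the fallback file name is made up, and the Content-Disposition parsing assumes the ColdFusion page actually sends that header):

// Sketch only (System, System.IO, System.Net). Fetch the URL, verify it is a PDF, save it to disk.
WebRequest wr = WebRequest.Create(billURL);
using (WebResponse response = wr.GetResponse())
{
    // The .cfm extension is irrelevant; the declared content type is what matters.
    if (response.ContentType.StartsWith("application/pdf", StringComparison.OrdinalIgnoreCase))
    {
        string fileName = "bill.pdf";   // fallback name, purely illustrative

        // If the server sends "Content-Disposition: attachment;filename=...", recover the original name.
        string disposition = response.Headers["Content-Disposition"];
        if (!string.IsNullOrEmpty(disposition) && disposition.Contains("filename="))
            fileName = disposition.Split(new[] { "filename=" }, StringSplitOptions.None)[1].Trim('"', ';', ' ');

        using (Stream responseStream = response.GetResponseStream())
        using (FileStream file = File.Create(fileName))
        {
            responseStream.CopyTo(file);   // .NET 4.0+; otherwise reuse the read loop from the question
        }
    }
}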

ASP.NET - Sending a PDF to the user

I have a process that creates a PDF. I want these PDFs to be temporary and short-lived. When the user clicks a button, I want to be able to perform the following:
string CreatePDF()//returns fileName.pdf
PromptUserToDownloadPDF()
DeletePDF(fileName.pdf)
I want to avoid having to create a cleanup procedure and deal with the race conditions that arise from users concurrently creating PDFs while cleanup is running.
In WinForms I would synchronously prompt the user to download a file. How can I do something similar on the web?
UPDATE
Please note that I am using a 3rd party app to create the PDF's (Apache FOP). Basically I (will) have a function that invokes the command line:
C:>fop "inputfile" "output.pdf"
So, in-memory is not an option... that is, unless I could somehow do something like:
string CreatePDF()//returns fileName.pdf
string RecreatePDFInMemory()
DeletePDF(fileName.pdf)
PromptUserToDownloadPDF()
Something like this:
byte[] _pdfbytes = CreatePDF();
Response.ContentType = "application/pdf";
Response.AppendHeader("Content-Length", _pdfbytes.Length.ToString());
Response.BinaryWrite(_pdfbytes);
Since this creates the PDF in memory, you don't need to worry about cleanup.
Edit for OP's edit:
From within CreatePDF, you can use Path.GetTempFileName to create a temp file and execute fop to write to that file. Delete that file immediately before returning the byte[]; I recommend doing the delete inside a finally block. However, fop does support having its output piped to stdout, so having the CreatePDF function grab that is probably cleaner.
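A hedged sketch of the temp-file variant (the fop arguments mirror the command line in the question; the method signature and everything else is illustrative):

// Requires System, System.Diagnostics, System.IO.
static byte[] CreatePDF(string inputFile)
{
    string tempPdf = Path.GetTempFileName();   // temp file for fop to overwrite
    try
    {
        // Same shape as the question's command line: fop "inputfile" "output.pdf"
        var psi = new ProcessStartInfo("fop", "\"" + inputFile + "\" \"" + tempPdf + "\"")
        {
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (Process fop = Process.Start(psi))
        {
            fop.WaitForExit();
        }
        return File.ReadAllBytes(tempPdf);
    }
    finally
    {
        // Delete the temp file whether or not fop succeeded.
        if (File.Exists(tempPdf))
            File.Delete(tempPdf);
    }
}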
Look into doing something along these lines.
Similar to what someone referred to in a different answer, you don't need to save the PDF file on your file system; you can just send it as a response.
I'm not sure how you're creating your PDF, but look at the snippet below and see whether your process could use something like it.
HttpResponse currentResponse = HttpContext.Current.Response;
currentResponse.Clear();
currentResponse.ClearHeaders();
currentResponse.ContentType = "application/pdf";
currentResponse.AppendHeader("Content-Disposition", "attachment; filename=my.pdf");
//create the "my.pdf" here
currentResponse.Flush();
currentResponse.End();
Not sure of your process, but you should be able to write the PDF to a byte[] and skip writing to disk altogether.
byte[] pdf = GetPDFBytes(filename);
MemoryStream pdfStream = new MemoryStream(pdf);
Then use pdfStream to send the PDF back to the user.
You can stream a file out from an ASP.NET page.
I tried to find a very old article for you which demonstrates this with a GIF (there's no actual file on disk):
http://www.informit.com/articles/article.aspx?p=25487
It uses a special page which streams out the data (and sets the content type appropriately in the header).
Similarly, you can make a "page" that streams out the PDF - it might not even need to reside on disk, but if it did, you could delete it after streaming it to the browser.
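A minimal sketch of such a streaming page's code-behind, under the assumption that the PDF was written to a temp path earlier and that path was stashed somewhere reachable (the Session key and file name below are purely illustrative):

protected void Page_Load(object sender, EventArgs e)
{
    string pdfPath = (string)Session["PendingPdfPath"];   // wherever the temp path was stored

    byte[] pdfBytes = File.ReadAllBytes(pdfPath);
    File.Delete(pdfPath);                                 // safe: the bytes are already in memory

    Response.Clear();
    Response.ContentType = "application/pdf";
    Response.AppendHeader("Content-Disposition", "attachment; filename=report.pdf");
    Response.BinaryWrite(pdfBytes);
    Response.End();
}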

How do I seamlessly compress the data I post to a form using C# and IIS?

I have to interface with a slightly archaic system that doesn't use webservices. In order to send data to this system, I need to post an XML document into a form on the other system's website. This XML document can get very large so I would like to compress it.
The other system sits on IIS and I use C# my end. I could of course implement something that compresses the data before posting it, but that requires the other system to change so it can decompress the data. I would like to avoid changing the other system as I don't own it.
I have heard vague things about enabling compression / HTTP 1.1 in IIS and the browser, but I have no idea how to translate that to my program. Basically, is there some property I can set in my program that will make it automatically compress the data it sends to IIS, and have IIS seamlessly decompress it so the receiving app doesn't even notice the difference?
Here is some sample code to show roughly what I am doing;
private static void demo()
{
    Stream myRequestStream = null;
    Stream myResponseStream = null;
    HttpWebRequest myWebRequest = (HttpWebRequest)System.Net.WebRequest.Create("http://example.com");
    byte[] bytMessage = null;
    bytMessage = Encoding.ASCII.GetBytes("data=xyz");
    myWebRequest.ContentLength = bytMessage.Length;
    myWebRequest.Method = "POST";
    // Set the content type as form so that the data
    // will be posted as form
    myWebRequest.ContentType = "application/x-www-form-urlencoded";
    //Get Stream object
    myRequestStream = myWebRequest.GetRequestStream();
    //Writes a sequence of bytes to the current stream
    myRequestStream.Write(bytMessage, 0, bytMessage.Length);
    //Close stream
    myRequestStream.Close();
    WebResponse myWebResponse = myWebRequest.GetResponse();
    myResponseStream = myWebResponse.GetResponseStream();
}
"data=xyz" will actually be "data=[a several MB XML document]".
I am aware that this question may ultimately fall under the non-programming banner if this is achievable through non-programmatic means so apologies in advance.
I see no way to compress the data on one side and have it arrive uncompressed on the other side without something actively decompressing it.
No idea if this will work, since all of the examples I could find were for download, but you could try using gzip to compress the data and then set the Content-Encoding header on the outgoing message to gzip. I believe the Content-Length should be the length of the zipped message, although you may want to try making it the length of the unencoded message if that doesn't work.
Good luck.
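A sketch of that idea applied to the question's demo() method (whether IIS or the receiving app will actually decompress it on the way in is exactly the open question in the edit below):

// Compress the form body with GZip and declare it via Content-Encoding.
// Requires System.IO and System.IO.Compression in addition to the namespaces already used above.
byte[] bytMessage = Encoding.ASCII.GetBytes("data=xyz");

byte[] compressed;
using (var ms = new MemoryStream())
{
    using (var gzip = new GZipStream(ms, CompressionMode.Compress, true /* leaveOpen */))
    {
        gzip.Write(bytMessage, 0, bytMessage.Length);
    }
    compressed = ms.ToArray();
}

HttpWebRequest myWebRequest = (HttpWebRequest)WebRequest.Create("http://example.com");
myWebRequest.Method = "POST";
myWebRequest.ContentType = "application/x-www-form-urlencoded";
myWebRequest.Headers["Content-Encoding"] = "gzip";     // advertise the compressed body
myWebRequest.ContentLength = compressed.Length;        // length of the *zipped* message

using (Stream requestStream = myWebRequest.GetRequestStream())
{
    requestStream.Write(compressed, 0, compressed.Length);
}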
EDIT: I think the issue is whether the ISAPI filter that supports compression is ever/always/configurably invoked on upload. I couldn't find an answer to that, so I suspect the answer is never, but you won't know until you try (or until you find the answer that eluded me).
