Downloaded xlsx file corrupt using IIS and C# HttpWebRequest - c#

I have the following problem:
When I'm downloading an xlsx or docx file, the downloaded file is corrupt. Other types of files (PDF, for example) work fine. I have also tested different MIME types (application/vnd.ms-excel, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet, and application/octet-stream). The following code is in use:
protected void btnActionDocAnsehen_Click(object sender, CommandEventArgs e)
{
string connectionString = ConfigurationManager.ConnectionStrings["dataConnection"].ToString();
SqlConnection conn = new SqlConnection(connectionString);
conn.Open();
//New Code for Downloading file
string filepath = pfad;
#endregion
#region [Data retrieval via WebDAV]
//Create a stream for the file
Stream stream = null;
//This controls how many bytes to read at a time and send to the client
int bytesToRead = 10000;
// Buffer to read bytes in chunk size specified above
byte[] buffer = new Byte[bytesToRead];
try
{
// --------------- COPY REQUEST --------------- //
// Create an HTTP request for the URL.
HttpWebRequest httpCopyRequest =
(HttpWebRequest)WebRequest.Create(filepath);
httpCopyRequest.Timeout = 3000;
// Pre-authenticate the request.
httpCopyRequest.PreAuthenticate = true;
//Create a response for this request
HttpWebResponse httpCopyResponse = (HttpWebResponse)httpCopyRequest.GetResponse();
if (httpCopyRequest.ContentLength > 0)
httpCopyResponse.ContentLength = httpCopyRequest.ContentLength;
//Get the Stream returned from the response
stream = httpCopyResponse.GetResponseStream();
// prepare the response to the client. resp is the client Response
var resp = HttpContext.Current.Response;
//Indicate the type of data being sent
resp.ContentType = ReturnFiletype(Path.GetExtension(filepath));
//Name the file
resp.AddHeader("Content-Disposition", "attachment; filename=\"" + pfad + "\"");
int length;
do
{
// Verify that the client is connected.
if (resp.IsClientConnected)
{
// Read data into the buffer.
length = stream.Read(buffer, 0, bytesToRead);
// and write it out to the response's output stream
resp.OutputStream.Write(buffer, 0, length);
// Flush the data
resp.Flush();
//Clear the buffer
buffer = new Byte[bytesToRead];
}
else
{
// cancel the download if client has disconnected
length = -1;
}
} while (length > 0); //Repeat until no data is read
}
catch (WebException ex)
{
string message = "Die Datei konnte auf dem Server nicht gefunden werden! Bitte kontaktieren Sie die Administration!";
//string message = filepath;
ClientScript.RegisterStartupScript(GetType(), "alert", "alert('" + message + "');", true);
}
finally
{
if (stream != null)
{
//Close the input stream
stream.Close();
}
}
#endregion
}

If you don't end your HttpContext.Current.Response, ASP.NET will append the page markup to the bottom of the (file) OutputStream.
Check your file with a text editor (Notepad) and you will probably see something like this at the bottom:
<!DOCTYPE html>
<html lang="en">
<head><meta charset="utf-8" /><meta name="viewport" content="width=device-width, initial-scale=1.0" /><title>
(...)
I guess it works with PDFs because your PDF reader probably ignores anything that is not properly formatted, or anything after its end marker.
You simply have to add resp.End() after the loop, like this:
(...)
} while (length > 0); //Repeat until no data is read
resp.End();
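For reference, here is a minimal sketch of the corrected tail of the handler, reusing the names from the question (resp, stream, buffer, bytesToRead); the only change is ending the response after the copy loop. Note that Response.End() raises a ThreadAbortException internally, so HttpContext.Current.ApplicationInstance.CompleteRequest() is a gentler alternative if you prefer to avoid that.
int length;
do
{
    if (resp.IsClientConnected)
    {
        // Read a chunk from the source stream and forward it to the client.
        length = stream.Read(buffer, 0, bytesToRead);
        resp.OutputStream.Write(buffer, 0, length);
        resp.Flush();
    }
    else
    {
        // Stop sending if the client has disconnected.
        length = -1;
    }
} while (length > 0);
// End the response so the page markup is not appended after the file bytes.
resp.End();
// Alternative: HttpContext.Current.ApplicationInstance.CompleteRequest();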

Related

File download in chunks in http-context response C#

I have the below scenario.
The client sends a request to Server-1 for a file download.
Server-1 sends a request to Server-2 for the file.
To make this work I need to create a mechanism where, once the client sends a request to Server-1, Server-1 requests the file from Server-2, which sends it back as a response output stream in chunks. Server-1 then sends these chunks to the client browser continuously as it keeps receiving them from Server-2.
I have written the code below; theoretically it looks fine, but it is still not working.
It is not downloading the entire file in the client browser; it seems the last chunk is either not transferred to Server-1 or not downloaded to the client browser from Server-1.
Server-1 code (where the client requests the file download):
private void ProccesBufferedResponse(HttpWebRequest webRequest, HttpContext context)
{
char[] responseChars = null;
byte[] buffer = null;
if (webRequest == null)
logger.Error("Request string is null for Perfios Docs Download at ProccesBufferedResponse()");
context.Response.Buffer = false;
context.Response.BufferOutput = false;
try
{
WebResponse webResponse = webRequest.GetResponse();
context.Response.ContentType = "application/pdf";
context.Response.AddHeader("Content-disposition", webResponse.Headers["Content-disposition"]);
StreamReader responseStream = new StreamReader(webResponse.GetResponseStream());
while (!responseStream.EndOfStream)
{
responseChars = new char[responseStream.ToString().ToCharArray().Length];
responseStream.Read(responseChars, 0, responseChars.Length);
buffer = Encoding.ASCII.GetBytes(responseChars);
context.Response.Clear();
context.Response.OutputStream.Write(buffer, 0, buffer.Length);
context.Response.Flush();
}
}
catch (Exception ex)
{
throw;
}
finally
{
context.Response.Flush();
context.Response.End();
}
}
Server-2 code (where Server-1 sends the request for the file):
private void DownloadInstaPerfiosDoc(int CompanyID, string fileName, string Foldertype)
{
string folderPath;
string FilePath;
int chunkSize = 1024;
int startIndex = 0;
int endIndex = 0;
int length = 0;
byte[] bytes = null;
DirectoryInfo dir;
folderPath = GetDocumentDirectory(CompanyID, Foldertype);
FilePath = folderPath + "\\" + fileName;
dir = new DirectoryInfo(folderPath);
HttpContext.Current.Response.Buffer = false;
HttpContext.Current.Response.BufferOutput = false;
if (dir.Exists && dir.GetFiles().Length > 0)
{
foreach (var file in dir.GetFiles(fileName))
{
FilePath = folderPath + "\\" + file.Name;
FileStream fsReader = new FileStream(FilePath, FileMode.Open, FileAccess.Read);
HttpContext.Current.Response.ContentType = "application/pdf";
HttpContext.Current.Response.AddHeader("Content-disposition", string.Format("attachment; filename = \"{0}\"", fileName));
int totalChunks = (int)Math.Ceiling((double)fsReader.Length / chunkSize);
for (int i = 0; i < totalChunks; i++)
{
startIndex = i * chunkSize;
if (startIndex + chunkSize > fsReader.Length)
endIndex = (int)fsReader.Length;
else
endIndex = startIndex + chunkSize;
length = (int)endIndex - startIndex;
bytes = new byte[length];
fsReader.Read(bytes, 0, bytes.Length);
HttpContext.Current.Response.Clear();
HttpContext.Current.Response.OutputStream.Write(bytes, 0, bytes.Length);
HttpContext.Current.Response.Flush();
}
}
}
}
Please help me to resolve this issue.
It is possible and feasible. I'll give a pseudo procedure for you to understand the overall idea, followed by a rough sketch after the outline.
Server1
download action gets hit
create a request to server2
get the response stream of your server2 request
read the response stream in desired chunk sizes until it's consumed completely
write each chunk (as soon as you read) to current response stream
Server2
download action gets hit
write your stream onto your current response stream however you like
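A rough sketch of the Server-1 side along those lines (the method name ProxyFileResponse is illustrative; the PDF content type and the Content-Disposition pass-through are taken from the question's code). The key point is to copy raw bytes rather than going through a StreamReader and a text encoding, which corrupts binary data:
// Requires: using System.IO; using System.Net; using System.Web;
private void ProxyFileResponse(HttpWebRequest webRequest, HttpContext context)
{
    // Stream straight through without buffering the whole file on Server-1.
    context.Response.BufferOutput = false;
    using (WebResponse webResponse = webRequest.GetResponse())
    using (Stream source = webResponse.GetResponseStream())
    {
        context.Response.ContentType = "application/pdf";
        context.Response.AddHeader("Content-Disposition", webResponse.Headers["Content-Disposition"]);
        byte[] buffer = new byte[64 * 1024];
        int read;
        // Forward each chunk to the client as soon as it arrives from Server-2.
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            context.Response.OutputStream.Write(buffer, 0, read);
            context.Response.Flush();
        }
    }
    // Finish the response so nothing is appended after the file bytes.
    context.Response.End();
}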

How to refresh Memory Stream while downloading from FTP

I have code which downloads a file (a zip file) from an FTP server. But when I change the files inside the zip file, the download still comes from memory. What I mean is: I download a zip file which contains file 1, file 2, and file 3. Next time I change the zip file's contents (the same file 1, file 2, file 3, but with new data in the files) and upload it to FTP. When I download from FTP directly using WS_FTP Pro I can see the new files, but when I use my code to download the zip file I get the old copy from memory. How can I refresh the memory stream so that when I download the new zip file, I get the new files inside it?
My code is:
public static bool downloadFromWeb(string URL, string file, string targetFolder)
{
try
{
byte[] downloadedData;
downloadedData = new byte[0];
//open a data stream from the supplied URL
WebRequest webReq = WebRequest.Create(URL + file);
WebResponse webResponse = webReq.GetResponse();
Stream dataStream = webResponse.GetResponseStream();
//Download the data in chunks
byte[] dataBuffer = new byte[1024];
//Get the total size of the download
int dataLength = (int)webResponse.ContentLength;
//lets declare our downloaded bytes event args
ByteArgs byteArgs = new ByteArgs();
byteArgs.downloaded = 0;
byteArgs.total = dataLength;
//we need to test for a null as if an event is not consumed we will get an exception
if (bytesDownloaded != null) bytesDownloaded(byteArgs);
//Download the data
MemoryStream memoryStream = new MemoryStream();
memoryStream.SetLength(0);
while (true)
{
//Let's try and read the data
int bytesFromStream = dataStream.Read(dataBuffer, 0, dataBuffer.Length);
if (bytesFromStream == 0)
{
byteArgs.downloaded = dataLength;
byteArgs.total = dataLength;
if (bytesDownloaded != null) bytesDownloaded(byteArgs);
//Download complete
break;
}
else
{
//Write the downloaded data
memoryStream.Write(dataBuffer, 0, bytesFromStream);
byteArgs.downloaded = bytesFromStream;
byteArgs.total = dataLength;
if (bytesDownloaded != null) bytesDownloaded(byteArgs);
}
}
//Convert the downloaded stream to a byte array
downloadedData = memoryStream.ToArray();
//Release resources
dataStream.Close();
memoryStream.Close();
//Write bytes to the specified file
FileStream newFile = new FileStream(targetFolder + file, FileMode.Create);
newFile.Write(downloadedData, 0, downloadedData.Length);
newFile.Close();
return true;
}
catch (Exception)
{
//We may not be connected to the internet
//Or the URL may be incorrect
return false;
}
}
Please point out what I should change so that I always download the new zip file from FTP.
I have added a couple of lines to my previous code; they tell the request not to use cached data. Here is my code with the changes.
public static bool downloadFromWeb(string URL, string file, string targetFolder)
{
try
{
byte[] downloadedData;
downloadedData = new byte[0];
// Set a default policy level for the "http:" and "https" schemes.
HttpRequestCachePolicy policy = new HttpRequestCachePolicy(HttpRequestCacheLevel.Default);
HttpWebRequest.DefaultCachePolicy = policy;
//open a data stream from the supplied URL
WebRequest webReq = WebRequest.Create(URL + file);
// Define a cache policy for this request only.
HttpRequestCachePolicy noCachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.NoCacheNoStore);
webReq.CachePolicy = noCachePolicy;
WebResponse webResponse = webReq.GetResponse();
Stream dataStream = webResponse.GetResponseStream();
//Download the data in chunks
byte[] dataBuffer = new byte[1024];
//Get the total size of the download
int dataLength = (int)webResponse.ContentLength;
//lets declare our downloaded bytes event args
ByteArgs byteArgs = new ByteArgs();
byteArgs.downloaded = 0;
byteArgs.total = dataLength;
//we need to test for a null as if an event is not consumed we will get an exception
if (bytesDownloaded != null) bytesDownloaded(byteArgs);
//Download the data
MemoryStream memoryStream = new MemoryStream();
memoryStream.SetLength(0);
while (true)
{
//Let's try and read the data
int bytesFromStream = dataStream.Read(dataBuffer, 0, dataBuffer.Length);
if (bytesFromStream == 0)
{
byteArgs.downloaded = dataLength;
byteArgs.total = dataLength;
if (bytesDownloaded != null) bytesDownloaded(byteArgs);
//Download complete
break;
}
else
{
//Write the downloaded data
memoryStream.Write(dataBuffer, 0, bytesFromStream);
byteArgs.downloaded = bytesFromStream;
byteArgs.total = dataLength;
if (bytesDownloaded != null) bytesDownloaded(byteArgs);
}
}
//Convert the downloaded stream to a byte array
downloadedData = memoryStream.ToArray();
//Release resources
dataStream.Close();
memoryStream.Close();
//Write bytes to the specified file
FileStream newFile = new FileStream(targetFolder + file, FileMode.Create);
newFile.Write(downloadedData, 0, downloadedData.Length);
newFile.Close();
return true;
}
catch (Exception)
{
//We may not be connected to the internet
//Or the URL may be incorrect
return false;
}
}
I believe your GET request is being cached by the WebRequest. Try adding a random query parameter and see if it helps.
WebRequest webReq = WebRequest.Create(URL + file + "?nocache=" + DateTime.Now.Ticks.ToString());

Network error downloading files on OSX

I have a site with a list of photos. The user has the option to download each of the photos. The download just writes the result to the output stream.
Here's the code:
[WebMethod]
public static void DownloadPhotoAsset(string assetId)
{
var photoAsset = GetPhotoAsset(assetId);
Stream stream = null;
int bytesToRead = 10000;
byte[] buffer = new Byte[bytesToRead];
try
{
HttpWebRequest fileReq =
(HttpWebRequest)HttpWebRequest.Create(photoAsset.FileAbsoluteUrl);
HttpWebResponse fileResp = (HttpWebResponse)fileReq.GetResponse();
if (fileReq.ContentLength > 0)
fileResp.ContentLength = fileReq.ContentLength;
stream = fileResp.GetResponseStream();
var resp = HttpContext.Current.Response;
resp.ContentType = "application/octet-stream";
resp.AddHeader("Content-Disposition",
"attachment; filename=\"" +
Path.GetFileName(photoAsset.FileAbsoluteUrl) + "\"");
resp.AddHeader("Content-Length", fileResp.ContentLength.ToString());
int length;
do
{
// verify that the client is connected.
if (resp.IsClientConnected)
{
// read data into the buffer and write it out
// to the response's output stream
length = stream.Read(buffer, 0, bytesToRead);
resp.OutputStream.Write(buffer, 0, length);
// flush the data and clear the buffer
resp.Flush();
buffer = new Byte[bytesToRead];
}
else
length = -1; // cancel the download if client has disconnected
} while (length > 0); //Repeat until no data is read
}
finally
{
if (stream != null)
stream.Close(); // close the input stream
}
}
This works fine in every browser on Windows, but I get network connection issues on Macs.
In Safari, the download stops after a second and says "The network connection was lost"
In Chrome, the error says "Failed - Network Error"
In Firefox, the error says "Download Error - image.jpeg.part could not be saved, because the source file could not be read"
I've checked on two different Macs with OSX 10.7.4 and OSX 10.8.3.
Anyone know what I'm doing wrong here?

download zip files by use of reader in c#

I have been working on an application that enables the user to log in to another website and then download a specified file from that server. So far I have succeeded in logging in to the website and downloading the file. But everything goes wrong when it comes to zip files.
Is there any chunk of code that could be helpful in reading the .zip files byte by byte or by using a stream reader?
I'm using DownloadFile() but it's not returning the correct zip file.
I need a method by which I can read zip files. Can I do it by using ByteReader()?
The code used to download the zip file is:
string filename = "13572_BranchInformationReport_2012-05-22.zip";
string filepath = "C:\\Documents and Settings\\user\\Desktop\\" + filename.ToString();
WebClient client = new WebClient();
string user = "abcd", pass = "password";
client.Credentials = new NetworkCredential(user, pass);
client.Encoding = System.Text.Encoding.UTF8;
try
{
client.DownloadFile("https://web.site/archive/13572_BranchInformationReport_2012-05-22.zip", filepath);
Response.Write("Success");
}
catch (Exception ue)
{
Response.Write(ue.Message);
}
Thanks in advance.
Is there any chunk of code that could be helpful in reading the zip files byte by byte or by using a stream reader?
Absolutely not. StreamReader - and indeed any TextReader - is for reading text content, not binary content. A zip file is not text - it's composed of bytes, not characters.
If you're reading binary content such as zip files, you should be using a Stream rather than a TextReader of any kind.
Note that WebClient.DownloadFile and WebClient.DownloadData can generally make things easier for downloading binary content.
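For example, here is a minimal sketch of the WebClient route, reusing the credentials and URL from the question (the target path is only an illustration); DownloadData returns the raw bytes, so no text decoding is involved:
// Requires: using System.IO; using System.Net;
WebClient client = new WebClient();
client.Credentials = new NetworkCredential("abcd", "password");
// The response body comes back as a byte[], which is safe for binary content such as zip files.
byte[] zipBytes = client.DownloadData("https://web.site/archive/13572_BranchInformationReport_2012-05-22.zip");
File.WriteAllBytes(@"C:\Documents and Settings\user\Desktop\13572_BranchInformationReport_2012-05-22.zip", zipBytes);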
Another simple way to download a zip file:
<asp:HyperLink ID="HyperLink1" runat="server" NavigateUrl="~/DOWNLOAD/Filename.zip">Click To Download</asp:HyperLink>
Another solution
private void DownloadFile()
{
string getPath = "DOWNLOAD/FileName.zip";
System.IO.Stream iStream = null;
byte[] buffer = new Byte[1024];
// Length of the file:
int length;
// Total bytes to read:
long dataToRead;
// Identify the file to download including its path.
string filepath = Server.MapPath(getPath);
// Identify the file name.
string filename = System.IO.Path.GetFileName(filepath);
try
{
// Open the file.
iStream = new System.IO.FileStream(filepath, System.IO.FileMode.Open,
System.IO.FileAccess.Read, System.IO.FileShare.Read);
// Total bytes to read:
dataToRead = iStream.Length;
// Page.Response.ContentType = "application/vnd.android.package-archive";
// Page.Response.ContentType = "application/octet-stream";
Page.Response.AddHeader("Content-Disposition", "attachment; filename=" + filename);
// Read the bytes.
while (dataToRead > 0)
{
// Verify that the client is connected.
if (Response.IsClientConnected)
{
// Read the data in buffer.
length = iStream.Read(buffer, 0, 1024);
// Write the data to the current output stream.
Page.Response.OutputStream.Write(buffer, 0, length);
// Flush the data to the HTML output.
Page.Response.Flush();
// buffer = new Byte[1024];
dataToRead = dataToRead - length;
}
else
{
//prevent infinite loop if user disconnects
dataToRead = -1;
}
}
}
catch (Exception ex)
{
// Trap the error, if any.
Page.Response.Write(ex.Message);
}
finally
{
if (iStream != null)
{
//Close the file.
iStream.Close();
Page.Response.Close();
}
}
}
This is how I achieved it. Thanks everyone for your help.
WebRequest objRequest = System.Net.HttpWebRequest.Create(url);
WebResponse objResponse = objRequest.GetResponse();
byte[] buffer = new byte[32768];
using (Stream input = objResponse.GetResponseStream())
{
using (FileStream output = new FileStream("test.doc",
FileMode.CreateNew))
{
int bytesRead;
while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
{
output.Write(buffer, 0, bytesRead);
}
}
}

Download function not showing total size of file while downloading

protected void downloadFunction(string filename)
{
string filepath = @"D:\XtraFiles\" + filename;
string contentType = "application/x-newton-compatible-pkg";
Stream iStream = null;
// Buffer to read 1024K bytes in chunk
byte[] buffer = new Byte[1048576];
// Length of the file:
int length;
// Total bytes to read:
long dataToRead;
try
{
// Open the file.
iStream = new FileStream(filepath, FileMode.Open, FileAccess.Read, FileShare.Read);
// Total bytes to read:
dataToRead = iStream.Length;
HttpContext.Current.Response.ContentType = contentType;
HttpContext.Current.Response.AddHeader("Content-Disposition", "attachment; filename=" + HttpUtility.UrlEncode(filename, System.Text.Encoding.UTF8));
// Read the bytes.
while (dataToRead > 0)
{
// Verify that the client is connected.
if (HttpContext.Current.Response.IsClientConnected)
{
// Read the data in buffer.
length = iStream.Read(buffer, 0, 10000);
// Write the data to the current output stream.
HttpContext.Current.Response.OutputStream.Write(buffer, 0, length);
// Flush the data to the HTML output.
HttpContext.Current.Response.Flush();
buffer = new Byte[10000];
dataToRead = dataToRead - length;
}
else
{
//prevent infinite loop if user disconnects
dataToRead = -1;
}
}
}
catch (Exception ex)
{
// Trap the error, if any.
HttpContext.Current.Response.Write("Error : " + ex.Message + "<br />");
HttpContext.Current.Response.ContentType = "text/html";
HttpContext.Current.Response.Write("Error : file not found");
}
finally
{
if (iStream != null)
{
//Close the file.
iStream.Close();
}
HttpContext.Current.Response.End();
HttpContext.Current.Response.Close();
}
}
My download function is working perfectly, but when users are downloading, the browser can't see the total file size of the download.
So now the browser says e.g. "Downloading 8 MB of ?", instead of "Downloading 8 MB of 142 MB".
What have I missed?
The Content-Length header seems to be what you are missing.
If you set this, the browser will then know how much to expect. Otherwise it will just keep going until you stop sending data, and it won't know how long the download is until the end.
Response.AddHeader("Content-Length", iStream.Length.ToString());
You may also be interested in Response.WriteFile, which can provide an easier way to send a file to a client without having to worry about streams yourself.
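A rough sketch of that alternative, assuming the same filepath, filename, and contentType variables as in the question:
// Requires: using System.IO; using System.Web;
HttpContext.Current.Response.ContentType = contentType;
HttpContext.Current.Response.AddHeader("Content-Disposition", "attachment; filename=" + HttpUtility.UrlEncode(filename, System.Text.Encoding.UTF8));
HttpContext.Current.Response.AddHeader("Content-Length", new FileInfo(filepath).Length.ToString());
// WriteFile sends the whole file without a manual read/write loop.
HttpContext.Current.Response.WriteFile(filepath);
HttpContext.Current.Response.End();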
You need to send a Content-Length header:
HttpContext.Current.Response.AddHeader("Content-Length", iStream.Length.ToString());
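In the question's downloadFunction this means adding the header before the first chunk is written, e.g. right after the Content-Disposition header and before the while loop:
// Announce the total size up front so the browser can show "8 MB of 142 MB".
HttpContext.Current.Response.AddHeader("Content-Length", iStream.Length.ToString());
// ... the existing while (dataToRead > 0) read/write loop follows unchanged.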
