WebClient DownloadFile method downloads damaged PDF files - C#

In my Windows application I am using the WebClient DownloadFile method to download several PDF files from a server on the local network.
Each file is a report that gets generated when its URL is called. Reports are of different sizes and take different amounts of time to generate.
My code loops through a list of URLs (about 400) and calls DownloadFile for each one, so the corresponding report is generated and downloaded to the local machine. The URLs are definitely correct.
The problem is that almost every time the application runs, some of the downloaded files are damaged: only about 7 KB is downloaded (I think it's just metadata), and Acrobat Reader gives me a message when I try to open the file:
“…it’s either not a supported file type or because the file has been damaged…”
It’s not always the same files that get damaged; when I re-run the application, those files often succeed and some others might fail… it seems to be random and I can’t work out the pattern.
Note 1: I don’t want a file to start downloading until the previous one has finished downloading; that’s why I am not using the asynchronous method.
Note 2: All files are Oracle reports and get generated by querying a database.
Note 3: No EXCEPTION is thrown in case of damaged files.
Here is my code:
using (WebClient client = new WebClient())
{
    for (int i = 0; i < URL_List.Length; i++)
    {
        try
        {
            client.DownloadFile(URL_List[i], myLocalPath + fileName + ".pdf");
        }
        catch (Exception x)
        {
            // write exception message to error log...
        }
    }
}
Thanks in advance.
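A minimal diagnostic sketch, assuming the ~7 KB bodies are error pages rather than real PDFs: download the bytes, check for the "%PDF" signature before writing the file, and retry a few times before giving up. The helper name, retry count and delay below are illustrative.

using System.IO;
using System.Net;
using System.Text;
using System.Threading;

static class PdfDownloadHelper
{
    // Returns true once a real PDF has been written to destinationPath.
    public static bool TryDownloadPdf(WebClient client, string url, string destinationPath)
    {
        const int maxAttempts = 3;                // illustrative retry limit
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            byte[] data = client.DownloadData(url);

            // A real PDF starts with the bytes "%PDF"; a short HTML/error body fails this check.
            if (data.Length > 4 && Encoding.ASCII.GetString(data, 0, 4) == "%PDF")
            {
                File.WriteAllBytes(destinationPath, data);
                return true;
            }

            Thread.Sleep(2000);                   // give the report time to generate
        }
        return false;                             // caller can log the URL for a later retry
    }
}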

Related

Downloading a file in C# incorrectly returns a file that is zero bytes long

So I'm trying to download a file using the WebClient class, but the problem is that when the download is finished the file is 0 bytes. I tried uploading the same file without an extension and then changing it after download, but that didn't help. What can I do? This is the code I use:
WebClient updateDownloader = new WebClient();
updateDownloader.DownloadFile(new Uri("http://zazaia.ucoz.com/SomeExeFile.exe"),
                              Application.StartupPath + "\\SomeFile.EXE");
I also have a DownloadCompleted event handler, which just shows a MessageBox and disposes the WebClient.
There is nothing wrong with the code you have shown, and this should work. The problem is on the server, which is not returning the file properly. Also make sure that the site you are querying doesn't require some authentication before it will let you download files. In addition, don't forget that WebClient will not execute any JavaScript, so if the server relies on it to serve the file, the download will not happen.
Have you checked that your antivirus is not interfering? Sometimes an automatic scan will lock an executable file being downloaded until it passes. The client code itself looks fine however.
What about the server side? If it is one of your own applications serving the download, it may not be setting the MIME header, or it may not be handling the download correctly at all.
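A quick way to check the server-side theory is to look at what WebClient actually received; a minimal sketch (the URL here is just a placeholder):

using System;
using System.Net;

WebClient client = new WebClient();
byte[] data = client.DownloadData("http://example.com/SomeExeFile.exe");

// If this prints text/html instead of a binary type, the server returned an
// error or login page rather than the file itself.
string contentType = client.ResponseHeaders[HttpResponseHeader.ContentType];
Console.WriteLine("Content-Type: {0}, Length: {1}", contentType, data.Length);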

Wait for download to complete

I need to download a zip file created in realtime from a webservice.
Let me explain.
I am developing a web application that uses a SoapXml web service. There is an Export function in the web service that returns a temporary URL for downloading the file. Upon a download request, the server creates the file and makes it available for download after a few seconds.
I'm trying to use
webClient.DownloadFile(url, @"c:/etc../")
This call downloads the file but saves it as 0 KB; it is too fast! The server does not have time to create the file. I also tried to put
webClient.OpenRead(url);
System.Threading.Thread.Sleep(7000);
webClient.DownloadFile(url, @"c:/etc../");
but that does not work.
In debug mode, if I put a breakpoint on webClient.DownloadFile and resume after 3 or 4 seconds, the server has time to create the file and I get a complete download.
The developers of the web service suggested that I use "polling" on the URL until the file is ready for download. How does that work?
How can I solve this problem? (I also tried DownloadFile in asynchronous mode.)
I have a similar mechanism in my application, and it works perfectly. WebClient makes the request and waits, because the server is creating the response (the file). If WebClient downloads 0 KB, it means the server answered the request with an empty response. This may not be a bug but a design decision. If creating the file takes a long time, this approach could result in timeouts; on the other hand, if creating the file takes a short time, the server side should respond with the file (making WebClient wait on the request until the file is ready). I would discuss this with the other developers and maybe redesign the "file generator".
EDIT: Polling means making requests in a loop, for example every 2 seconds. I'm using DownloadData because it's pointless, and resource-consuming, to save an empty file every time, which is what DownloadFile does.
public void PollAndDownloadFile(Uri uri, string filePath)
{
    using (WebClient webClient = new WebClient())
    {
        // Keep asking for the file until the server returns a non-empty body
        byte[] downloadedBytes = webClient.DownloadData(uri);
        while (downloadedBytes.Length == 0)
        {
            Thread.Sleep(2000);
            downloadedBytes = webClient.DownloadData(uri);
        }

        // Only write the result to disk once it is non-empty
        using (Stream file = File.Open(filePath, FileMode.Create))
        {
            file.Write(downloadedBytes, 0, downloadedBytes.Length);
        }
    }
}
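A call might look like this (the URI and local path are illustrative); note that a production version should also cap the number of retries so a permanently empty response cannot loop forever:

PollAndDownloadFile(new Uri("http://example.com/export/report.zip"),
                    @"C:\temp\report.zip");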

How do I make WebClient.DownloadFile() work with xnb, xgs, xsb, xwb files?

I'm creating an Updater program in C# for my PC game that basically sends an Http message to the server to find out what the latest version of the game is. If there is a newer version, it downloads the necessary files. To download the files I used the WebClient.DownloadFile() method. There are a few posts on the forums detailing problems with this method but none of them are quite like mine.
I use the method like this:
WebClient webClient = new WebClient();
webClient.DownloadFile(sOriginFile, sDestinationFile);
I immediately ran into a problem downloading any files with the following extensions:
.xnb
.xgs
.xsb
.xwb
I would get an exception stating "The remote server returned an error: (404) Not Found."
So as an experiment I added 3 more common file types to the same directory:
.txt
.doc
.jpg
and the DownloadFile() method worked perfectly for those files. Does anybody know why this method isn't working for the first 4 file types but works fine with the last 3?
Also, I tried WebClient.DownloadData() and HttpWebRequest.GetResponse() (after setting up the request); I even tried reversing the extension name on the server (.bnx), but no matter what, I got the exact same exception.
If anybody really wants to tackle this, here are links to 2 sample files (I tried to post all 7 sample files but Stack Overflow only allows me to post 2 links):
http://www.facepuncher.com/Versions/CastleAbra/1.1/Sample.txt
http://www.facepuncher.com/Versions/CastleAbra/1.1/UiCursor.xnb
Most likely the MIME settings for the file types you mention are set up incorrectly in IIS. Go to IIS Server Manager -> MIME settings and add the file types accordingly.
Probably a better way to transfer any file type would be to download only files named like
file.xnb.dat
file.xgs.dat
and rename them locally.
-Matthias
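A minimal sketch of that rename workaround, assuming the files can be hosted with an extra ".dat" extension the server already serves (the URL and local paths are placeholders):

using System.IO;
using System.Net;

WebClient webClient = new WebClient();
string tempPath = @"C:\game\update\UiCursor.xnb.dat";
string finalPath = @"C:\game\update\UiCursor.xnb";

// Download under the extension the server is willing to serve...
webClient.DownloadFile("http://www.facepuncher.com/Versions/CastleAbra/1.1/UiCursor.xnb.dat",
                       tempPath);

// ...then strip the extra extension locally.
if (File.Exists(finalPath))
    File.Delete(finalPath);   // File.Move throws if the target already exists
File.Move(tempPath, finalPath);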

C# System.Net WebClient.DownloadData - Error retrieving proper data

Perhaps the main issue is where I am uploading to - I am currently using MediaFire to upload my files. I have tested downloading files in both ".exe" and ".png" formats; neither seems to work for me, though.
The issue that is constantly occurring for me:
When I attempt to download a file (I will put the URLs of the two files at the end of my question), the amount of data retrieved is either far greater or far less than the actual size of the file. For example, I uploaded a blank VB6 executable file which is 16 KB; the downloaded file comes out to nearly 60 KB!
Some things to note:
1) Both files download with no problems through Chrome (and I'm assuming other browsers as well).
2) I have tried multiple methods of retrieving the data from the downloaded file (same result).
My Code:
// Create a new instance of the System.Net 'WebClient'
System.Net.WebClient client = new System.Net.WebClient();

// Download URL
// PNG file
string url = @"http://www.mediafire.com/imageview.php?quickkey=a16mo8gm03fv1d9&thumb=4";
// EXE file (blank VB6 exe file, 16 KB)
// string url = @"http://www.mediafire.com/download.php?nn1cupi7j5ia7cb";

// Destination
string savePath = Environment.GetFolderPath(Environment.SpecialFolder.Desktop) + @"\Test.png";

byte[] result = client.DownloadData(url);
MessageBox.Show(result.Length.ToString()); // Returns 57,000-something (over 3x larger than my EXE file!)

// Write downloaded data to desired destination
System.IO.File.WriteAllBytes(savePath, result);

C#, ASP.NET: zipping files takes a long time and the browser becomes unresponsive

I have images on my website, and the user can download selected images. When the user selects more than one image, the files are downloaded as a zip file; the zipping process takes place on the server.
However, when the user selects more files (say the total size increases to around 500 MB) and presses download, the zipping starts on the server, the web page hangs, and the user can't do anything until the zipping process has completed.
Sometimes the browser (Chrome, for example) shows a message offering to kill the page because it is taking too long. So I am looking for some help here.
I need a solid suggestion
Thanks
My code for zipping the files is:
public string Zip(string files, bool isOriginal)
{
    string zipFile = "";
    try
    {
        // 'files' is a '*'-separated list of file paths sent from the client
        files = HttpContext.Current.Server.UrlDecode(files);
        string[] fileCollection = files.Split('*');
        // class1.zipfile builds the archive and returns its path
        zipFile = class1.zipfile(fileCollection, isOriginal);
    }
    catch (Exception ex)
    {
        Console.WriteLine("Exception during processing {0}", ex);
    }
    return zipFile;
}
Three suggestions:
1. Show a busy screen on the client that lets the user know that he's waiting for a file download. You could achieve this, e.g., by downloading into a separate iframe and checking the status of the iframe (for a dynamic approach see Dynamically created iframe used to download file triggers onload with firebug but not without).
2. Check how you actually zip the files - is compression required, or do you just want to download all pictures together? Zipping with no compression is usually much faster.
3. If it still takes too long - even with a busy screen - consider an asynchronous approach: a user may request a zip file with the images he/she selected, which causes the system to start processing the zip file. The user may then wait, or come back to a dedicated page, to check on the status of the zip file and, if it's ready, download it immediately from the server (e.g. from a virtual directory you put it in after processing). This approach is a little more involved, and you also have to think about cleanup.
I would split the zipping off into a separate process (a BackgroundWorker?) and then update a status bar (via AJAX) to keep the user informed of approximately how long it's going to take.
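A rough sketch of that asynchronous approach, assuming class1.zipfile (the existing helper from the question) can run off the request thread and does not depend on HttpContext.Current; the class and method names here are illustrative, not from the original answers:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class ZipJobs
{
    // jobId -> path of the finished zip file (null while the job is still running)
    private static readonly ConcurrentDictionary<Guid, string> Jobs =
        new ConcurrentDictionary<Guid, string>();

    // Kick off the zipping on a background thread and return a job id immediately.
    public static Guid Start(string[] files, bool isOriginal)
    {
        Guid jobId = Guid.NewGuid();
        Jobs[jobId] = null;
        Task.Run(() =>
        {
            string zipPath = class1.zipfile(files, isOriginal); // existing helper from the question
            Jobs[jobId] = zipPath;
        });
        return jobId;
    }

    // Called from an AJAX polling request; returns the zip path once ready, otherwise null.
    public static string GetStatus(Guid jobId)
    {
        string path;
        return Jobs.TryGetValue(jobId, out path) ? path : null;
    }
}

The download page can then poll GetStatus from JavaScript every few seconds and, once a path comes back, redirect the browser to it; finished zip files should eventually be cleaned up, as mentioned above.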
