I am attempting to download a large (25 MB) PDF file using the following code:
string url = "http://aaa.aaa/test.pdf";
string clientfile = HttpContext.Current.Server.MapPath("~/123.pdf");
WebClient wc = new WebClient();
wc.DownloadFile(new Uri(url, UriKind.Absolute), clientfile);
However, the downloaded file's signature is corrupted and the file does not open correctly. Is there some way to delay the download, or something else I need to do, before it actually starts?
I know the file itself is fine, since downloading it in a browser produces an uncorrupted file.
Thanks
I found the problem. The server compresses the PDF response with gzip. A browser decompresses it automatically for the client, but with the WebClient.DownloadFile method in .NET you have to enable this yourself. First create a class that derives from WebClient, as follows:
public class GzipWebClient : WebClient
{
    protected override WebRequest GetWebRequest(Uri address)
    {
        // Ask the framework to transparently decompress gzip/deflate responses.
        HttpWebRequest request = base.GetWebRequest(address) as HttpWebRequest;
        request.AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip;
        return request;
    }
}
Then change your code to use the above class:
string url = "http://aaa.aaa/test.pdf";
string clientfile = HttpContext.Current.Server.MapPath("~/123.pdf");
GzipWebClient wc = new GzipWebClient();
wc.DownloadFile(new Uri(url, UriKind.Absolute), clientfile);
The PDF file will now be downloaded and decompressed correctly.
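If you want to double-check the result, a PDF always begins with the ASCII signature %PDF; a quick sanity check on the saved file might look like this (a sketch, reusing the clientfile path from above and assuming the usual System.IO and System.Text usings):
// A valid PDF begins with the ASCII signature "%PDF".
byte[] header = new byte[4];
using (FileStream fs = File.OpenRead(clientfile))
{
    fs.Read(header, 0, header.Length);
}
bool looksLikePdf = Encoding.ASCII.GetString(header) == "%PDF";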
Here I download a file from a website, which works fine. But I want to download a folder with its contents.
Before Download Example
MyprojectApp/Test.exe
My Code
WebClient wc = new WebClient();
String Filename = "some.txt";
Uri uri = new Uri("http://127.0.0.1/New/" + Filename);
wc.DownloadFileAsync(uri, "some1.txt");
After Download Example (What I need)
MyprojectApp/Test.exe
MyprojectApp/New/some.txt
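HTTP has no "download a folder" operation, so the usual approach is to download each file and recreate the directory structure locally. A minimal sketch, assuming the file names under /New/ are known (the list here is hypothetical):
string baseUrl = "http://127.0.0.1/New/";
string targetDir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "New");
Directory.CreateDirectory(targetDir); // creates MyprojectApp/New if it doesn't exist
using (WebClient wc = new WebClient())
{
    foreach (string name in new[] { "some.txt" }) // hypothetical file list
    {
        // Synchronous on purpose: DownloadFileAsync inside a using block
        // could dispose the client before the download finishes.
        wc.DownloadFile(new Uri(baseUrl + name), Path.Combine(targetDir, name));
    }
}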
I am using "webclient" to download and save a file by url in windows application.
here is my code:
WebClient wc = new WebClient();
wc.Headers.Add(HttpRequestHeader.Cookie, cc);
wc.DownloadFile(new Uri(e.Url.ToString()), targetPath);
This works fine on my local system (it downloads the file and saves it to the target path automatically, without showing any popup).
But when I run the .exe on the server, it shows a save/open popup.
Are any modifications required to the server settings to download the file?
Please help me download the file without showing the popup on the server too.
Thanks in advance.
Finally I found the solution to this issue.
Here is the code:
WebClient wc = new WebClient();
wc.Headers.Add(HttpRequestHeader.Cookie, cc);
using (Stream data = wc.OpenRead(new Uri(e.Url.ToString())))
{
    using (Stream targetfile = File.Create(targetPath))
    {
        data.CopyTo(targetfile);
    }
}
Here I just replaced the line
wc.DownloadFile(new Uri(e.Url.ToString()), targetPath);
with the OpenRead/CopyTo block shown above.
Now it's working fine.
Thanks all for your responses.
I am reading a file with the File.OpenRead method, passing this path:
http://localhost:10001/MyFiles/folder/abc.png
I have tried this as well, but no luck:
http://localhost:10001//MyFiles//abc.png
but it's giving:
URI formats are not supported.
When I give the physical path on my drive like this, it works fine:
d:\MyFolder\MyProject\MyFiles\folder\abc.png
How can I make this work with an HTTP path instead of a physical file path?
This is my code:
public FileStream GetFile(string filename)
{
    FileStream file = File.OpenRead(filename);
    return file;
}
Have a look at WebClient (MSDN docs); it has many utility methods for downloading data from the web.
If you want the resource as a Stream, try:
using (WebClient webClient = new WebClient())
{
    using (Stream stream = webClient.OpenRead(uriString))
    {
        using (StreamReader sr = new StreamReader(stream))
        {
            Console.WriteLine(sr.ReadToEnd());
        }
    }
}
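One caveat: StreamReader is only appropriate for text. For a binary resource such as abc.png, copy the raw bytes to a file instead; a sketch (the local target path is just an example, and Stream.CopyTo requires .NET 4 or later):
using (WebClient webClient = new WebClient())
using (Stream stream = webClient.OpenRead(uriString))
using (FileStream target = File.Create(@"C:\temp\abc.png")) // example target path
{
    stream.CopyTo(target); // streams the bytes straight to disk, no text decoding
}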
You could either use a WebClient as suggested in other answers or fetch the relative path like this:
var url = "http://localhost:10001/MyFiles/folder/abc.png";
var uri = new Uri(url);
var path = Path.GetFileName(uri.AbsolutePath);
var file = GetFile(path);
// ...
In general you should get rid of the absolute URLs.
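If this code runs inside the same ASP.NET application that serves /MyFiles, you can also skip HTTP entirely and map the virtual path to a physical one; a sketch, assuming the app-relative path ~/MyFiles/folder/abc.png and a reference to System.Web:
// Map the app-relative virtual path to a physical path and open it directly.
string physicalPath = HttpContext.Current.Server.MapPath("~/MyFiles/folder/abc.png");
FileStream file = File.OpenRead(physicalPath);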
A straightforward way to download the HTML is by using the WebRequest class. You do it like this:
private string GetWebsiteHtml(string url)
{
    WebRequest request = WebRequest.Create(url);
    using (WebResponse response = request.GetResponse())
    using (Stream stream = response.GetResponseStream())
    using (StreamReader reader = new StreamReader(stream))
    {
        // ReadToEnd pulls down the entire response body as a string;
        // the using blocks dispose the response, stream, and reader.
        return reader.ReadToEnd();
    }
}
Then, if you want to further process the HTML, e.g. to extract images or links, you will want to use a technique known as HTML scraping.
It's currently best achieved by using the HTML Agility Pack.
Also, documentation on WebClient class: MSDN
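For instance, extracting all link targets with the HTML Agility Pack might look like this (a sketch; it assumes the HtmlAgilityPack NuGet package is referenced, and the URL is a placeholder):
var web = new HtmlAgilityPack.HtmlWeb();
var doc = web.Load("http://example.com/");
// SelectNodes returns null when the XPath matches nothing.
var links = doc.DocumentNode.SelectNodes("//a[@href]");
if (links != null)
{
    foreach (var link in links)
    {
        Console.WriteLine(link.GetAttributeValue("href", string.Empty));
    }
}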
I found this snippet; it might do exactly what you need:
using (WebClient client = new WebClient())
{
    // DownloadFile returns void; it saves the resource directly to filename.
    client.DownloadFile(new Uri("http://.../abc.png"), filename);
}
It uses the WebClient class.
To convert a file:// URL to a UNC file name, you should use the Uri.LocalPath property, as documented.
In other words, you can do this:
public FileStream GetFile(string url)
{
    var filename = new Uri(url).LocalPath;
    FileStream file = File.OpenRead(filename);
    return file;
}
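For example, both UNC-style and drive-letter file URLs convert as expected:
// "file://server/share/abc.png"  -> \\server\share\abc.png
string unc = new Uri("file://server/share/abc.png").LocalPath;
// "file:///d:/MyFolder/abc.png"  -> d:\MyFolder\abc.png
string local = new Uri("file:///d:/MyFolder/abc.png").LocalPath;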
I have files on a server that can be accessed from a URL formatted like this:
http://address/Attachments.aspx?id=GUID
I have access to the GUID and need to be able to download multiple files to the same folder.
If you take that URL and put it in a browser, the file downloads with its original file name.
I want to replicate that behavior in C#. I have tried the WebClient class's DownloadFile method, but with that you have to specify a new file name. And even worse, DownloadFile will overwrite an existing file. I know I could generate a unique name for every file, but I'd really like the original.
Is it possible to download a file preserving the original file name?
Update:
Using the fantastic answer below with the WebRequest class, I came up with the following, which works perfectly:
public override void OnAttachmentSaved(string filePath)
{
    var webClient = new WebClient();
    // Get the original file name from the Content-Disposition header.
    var request = WebRequest.Create(filePath);
    var response = request.GetResponse();
    var contentDisposition = response.Headers["Content-Disposition"];
    const string contentFileNamePortion = "filename=";
    var fileNameStartIndex = contentDisposition.IndexOf(contentFileNamePortion, StringComparison.InvariantCulture) + contentFileNamePortion.Length;
    var originalFileNameLength = contentDisposition.Length - fileNameStartIndex;
    var originalFileName = contentDisposition.Substring(fileNameStartIndex, originalFileNameLength);
    // Download the file under its original name.
    webClient.UseDefaultCredentials = true;
    webClient.DownloadFile(filePath, String.Format(@"C:\inetpub\Attachments Test\{0}", originalFileName));
}
Just had to do a little string manipulation to get the actual filename. I'm so excited. Thanks everyone!
As hinted in the comments, the filename will be available in the Content-Disposition header. I'm not sure how to get its value when using WebClient, but it's fairly simple with WebRequest:
WebRequest request = WebRequest.Create("http://address/Attachments.aspx?id=GUID");
WebResponse response = request.GetResponse();
string originalFileName = response.Headers["Content-Disposition"];
Stream streamWithFileBody = response.GetResponseStream();
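Alternatively, rather than slicing the header string by hand, the framework can parse it for you; a sketch using System.Net.Mime.ContentDisposition (the constructor throws FormatException on malformed input, so this assumes a well-formed header):
string header = response.Headers["Content-Disposition"];
// Parses e.g. "attachment; filename=abc.pdf" and exposes the name directly.
string originalFileName = new System.Net.Mime.ContentDisposition(header).FileName;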
I'm uploading a file using the UploadFile method on the WebClient object. When the file is uploaded, I would like to get a confirmation, and according to MSDN (and also here on Stack Overflow: Should I check the response of WebClient.UploadFile to know if the upload was successful?) I should be able to read the returned byte array, but it is always empty.
Am I doing something the wrong way?
WebClient FtpClient = new WebClient();
FtpClient.Credentials = new NetworkCredential("test", "test");
byte[] responseArray = FtpClient.UploadFile("ftp://localhost/Sample.rpt", @"C:\Test\Sample.rpt");
string s = System.Text.Encoding.ASCII.GetString(responseArray);
Console.WriteLine(s); //Empty string
Or is it always successful if it doesn't return an exception?
Answer to myself: I couldn't make any sense of it, so I switched to the edtFTPnet library (http://www.enterprisedt.com/) instead.
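For reference, dropping down to FtpWebRequest at least surfaces the server's status reply after an upload; a minimal sketch reusing the same URL and credentials as above (assumes the usual System.Net and System.IO usings):
FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://localhost/Sample.rpt");
request.Method = WebRequestMethods.Ftp.UploadFile;
request.Credentials = new NetworkCredential("test", "test");
using (Stream requestStream = request.GetRequestStream())
using (FileStream file = File.OpenRead(@"C:\Test\Sample.rpt"))
{
    file.CopyTo(requestStream);
}
using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
{
    // e.g. "226 Transfer complete." when the server accepted the file
    Console.WriteLine(response.StatusDescription);
}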