I would like to be able to do this:
Response.WriteFile ("http://domain/filepath/file.mpg")
But, I get this error:
Invalid path for MapPath 'http://domain/filepath/file.mpg'
A virtual path is expected.
The WriteFile method does not appear to work with URLs. Is there any other way I can write the contents of a URL to my page?
Thanks.
If you need the code to work in that manner, then you will have to dynamically download it onto your server first:
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://domain/filepath/file.mpg");
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
Stream file = response.GetResponseStream();
From that point, you have the contents of the file as a stream, and you will have to read/write the bytes out to the response.
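A minimal sketch of that read/write step might look like this, assuming the code runs inside an ASP.NET page or handler where Response is in scope (the URL is the question's placeholder, and the buffer size is an arbitrary choice):

```csharp
// Stream the remote file through to the client in chunks,
// without buffering the whole file in memory.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://domain/filepath/file.mpg");
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (Stream remote = response.GetResponseStream())
{
    Response.ContentType = response.ContentType;   // pass the remote content type along
    byte[] buffer = new byte[8192];
    int bytesRead;
    while ((bytesRead = remote.Read(buffer, 0, buffer.Length)) > 0)
    {
        Response.OutputStream.Write(buffer, 0, bytesRead);
    }
}
```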
Note, however, that this is not necessarily optimal: it will eat your bandwidth, because every request makes your server download the file from the remote host and then upload it to the client, using far more resources than necessary.
If possible, move the file to your server, or rethink exactly what you are trying to do.
A possible solution would be to simply use:
Response.Redirect("http://domain/filepath/file.mpg")
But then, I am not sure if that is what you are really trying to do or not.
Basically, you have a couple of choices. You can either download the file to your server and serve it with Response.WriteFile, or you can redirect to the actual location. If the file is already on your server, just pass a file-system path to Response.WriteFile instead of the URL, or use a virtual path by removing http://domain.
I am trying to simulate a real web browser request and turns out when I use this code:
WebClient client = new WebClient();
client.DownloadFile(address, localFilename);
I get only the single GET to the address (of course), whereas in a browser there are many GET requests for images, blogger, etc.
Is there a shortcut to get/simulate the same behavior, or is the only alternative to parse the file/string and make all those requests myself manually?
Yes, a browser processes the file it receives (typically HTML) and parses it. Depending on what the file contains (links to other files such as images, scripts, and stylesheets), the browser then starts up many other connections to fetch all those other files and display them within the page.
That doesn't come for free: you have to do it yourself. DownloadFile just downloads a single file, which may or may not be HTML, so it doesn't process all the linked files for you.
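That parsing step could be sketched like this. A real browser uses a full HTML parser, so the regex below is a deliberate simplification, and the sample markup is invented for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

// After downloading the HTML, extract the URLs of linked resources so each
// one can then be fetched separately (e.g. with WebClient.DownloadFile).
class LinkedResourceExtractor
{
    public static List<string> Extract(string html)
    {
        var urls = new List<string>();
        // Match src="..." (images, scripts) and href="..." (stylesheets, links).
        foreach (Match m in Regex.Matches(html, @"(?:src|href)\s*=\s*""([^""]+)"""))
        {
            urls.Add(m.Groups[1].Value);
        }
        return urls;
    }

    static void Main()
    {
        string html = @"<html><head><link href=""style.css"" rel=""stylesheet""></head>" +
                      @"<body><img src=""logo.png""><script src=""app.js""></script></body></html>";
        foreach (string url in Extract(html))
            Console.WriteLine(url);   // style.css, logo.png, app.js
    }
}
```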
I want to read the files in a web directory.
I use this code:
string[] files;
string webfilepath = "http://www.anydomain.com/templates/images";
files = Directory.GetFiles(webfilepath, "*.*", SearchOption.TopDirectoryOnly);
The code throws an error saying the URL format is not supported. Is there any other way to read a web directory?
Thanks in advance.
is there any other way to read web directory
No. HTTP has no "directory index" you can retrieve, unless the server (or software running on it) generates one itself, for example via Apache's Options +Indexes configuration. But that index is generated as HTML, which you'll have to parse to get the filenames.
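As a sketch of that parsing step (the markup below imitates a typical Apache listing, but the exact HTML varies by server, so the pattern here is an assumption, not something that works everywhere):

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

// Pull filenames out of a server-generated index page. Sorting links start
// with '?' and the parent-directory link starts with '/', so the pattern
// rejects hrefs beginning with either character.
class DirectoryIndexParser
{
    public static List<string> ParseFileNames(string indexHtml)
    {
        var names = new List<string>();
        foreach (Match m in Regex.Matches(indexHtml, @"<a\s+href=""([^""?/][^""]*)"""))
        {
            names.Add(m.Groups[1].Value);
        }
        return names;
    }

    static void Main()
    {
        string html = @"<a href=""?C=N;O=D"">Name</a> <a href=""/templates/"">Parent Directory</a>" +
                      @"<a href=""banner.png"">banner.png</a> <a href=""logo.gif"">logo.gif</a>";
        foreach (string name in ParseFileNames(html))
            Console.WriteLine(name);   // banner.png, logo.gif
    }
}
```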
You can't do this in a generic way, because it's up to the web server how (and whether) it exposes the directory contents to the client.
First you need to make sure that your server sends out the listing in some known format. Then you can use HttpWebRequest to send an HTTP request and get the result; at the end of the day you will still have to do your own parsing of the result.
You are trying to pass a URL to Directory.GetFiles instead of the physical path of a folder. You can use Server.MapPath to get the physical path from the URL if the folder is accessible, i.e. the code accessing the folder is running on the machine the URL points to. If the URL is on a different machine, you cannot use Directory.GetFiles.
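If the folder does live on the same server as the ASP.NET application, that approach can be sketched like this (the virtual path is an assumption based on the question's URL):

```csharp
// Translate the virtual path to a physical one, then enumerate it locally.
// Only works when this code runs on the server that hosts the folder.
string physicalPath = Server.MapPath("~/templates/images");
string[] files = Directory.GetFiles(physicalPath, "*.*", SearchOption.TopDirectoryOnly);
```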
We're hosting SSRS reports on our servers, and we store their paths in a SQL Server table. From .NET, I want to be able to make sure the path entered is correct.
This should return "true" because there's a report there, for example:
http://server/Reports/Pages/Viewer.aspx?%2fShopFloor%2fProduction+by+Turn&rs:Command=Render
This should return "false":
http://I am an invalid location/I am laughing at you/now I'm UEEing
I was looking at WebRequests, but I don't know if that's the route I should be taking.
Any help would be appreciated. Thanks!
You can try making a HEAD request to validate that the resource exists. With a HEAD request, you only need the HTTP status code (200 = Success, 404 = Not Found), without consuming bandwidth or excess memory to download the entire resource. Take a look at the HttpWebRequest class for performing the actual request.
Do an HTTP HEAD request to the URL, which should fetch just the headers. This way you don't need to download the entire page if it exists; from the returned headers (if there are any) you should be able to determine whether it's a correct URL or not.
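A sketch of such a check; the helper name and the timeout value are my own choices, not part of any of the answers:

```csharp
using System;
using System.Net;

// Returns true only when a HEAD request comes back with HTTP 200.
// A 404, an unreachable host, or a malformed URL all count as "not there".
class UrlValidator
{
    public static bool UrlExists(string url)
    {
        try
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "HEAD";
            request.Timeout = 5000;   // don't hang forever on dead hosts
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                return response.StatusCode == HttpStatusCode.OK;
            }
        }
        catch (WebException)         // 404, DNS failure, timeout, ...
        {
            return false;
        }
        catch (UriFormatException)   // e.g. "http://I am an invalid location/..."
        {
            return false;
        }
    }

    static void Main()
    {
        // A malformed URL fails before any request is even sent.
        Console.WriteLine(UrlExists("http://I am an invalid location/"));
    }
}
```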
I'm unsure of the exact layout of your network, but if the .NET application has visibility to the location where the reports are stored, you could use File.Exists(). Otherwise, mellamokb and red-X have the right idea.
Is it possible to get the file properties of a web file, e.g. a PDF file, and get the date and time the file was modified? Thanks.
You're probably looking for the HttpWebResponse.LastModified property.
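For example, with a HEAD request the check stays cheap (a sketch; the URL is a placeholder, and it assumes the server actually sends a Last-Modified header):

```csharp
// HEAD avoids downloading the PDF itself; only headers come back.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://domain/files/report.pdf");
request.Method = "HEAD";
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
    // LastModified is parsed from the Last-Modified header. Beware: if the
    // server omits that header, the property defaults to the current time.
    Console.WriteLine(response.LastModified);
}
```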
No, you can't. It's not a file, it's a resource.
When you request it from the server you get a response back. In this case the server uses a file as the source for the resource, but it could just as well be something created on the fly. You only get the information that the server chooses to put in the HTTP headers.
You might, however, be able to get some more information if the server returns directory listings. Then you could request the folder and get a listing back, from which you could parse some information. Note, however, that it's not certain the server actually uses those files when you request something from the directory; it could still return something completely different. The directory listing is itself a resource, so there is no guarantee that it's relevant either.
How can I programmatically tell if a binary file on a website (e.g. image) has changed without downloading it? Is there a way using HTTP methods (in C# in this case) to check prior to fully downloading it?
Really, you want to look for the Last-Modified header after issuing a HEAD request (rather than a GET). I wrote some code to get the HEAD via WebClient here.
You can check whether the file has changed by making a HEAD request.
The returned response headers may include Last-Modified, or ETag if the web server supports it.
You can do a HEAD request and check the Last-Modified datetime value, as well as the Content-Length.
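Putting the answers above together, change detection might cache those header values and compare them on the next HEAD request. This is a sketch, and the header values below are made up for illustration:

```csharp
using System;
using System.Net;

// Compare the validators from a previous HEAD response with the current ones.
// In practice the collections would come from HttpWebResponse.Headers.
class ChangeDetector
{
    public static bool HasChanged(WebHeaderCollection cached, WebHeaderCollection current)
    {
        // Prefer ETag when both responses carry one; it is the strongest validator.
        string oldTag = cached["ETag"], newTag = current["ETag"];
        if (oldTag != null && newTag != null)
            return oldTag != newTag;

        // Otherwise fall back to Last-Modified plus Content-Length.
        return cached["Last-Modified"] != current["Last-Modified"]
            || cached["Content-Length"] != current["Content-Length"];
    }

    static void Main()
    {
        var before = new WebHeaderCollection { { "ETag", "\"v1\"" }, { "Content-Length", "1024" } };
        var after  = new WebHeaderCollection { { "ETag", "\"v2\"" }, { "Content-Length", "1024" } };
        Console.WriteLine(HasChanged(before, after));   // different ETag, so True
    }
}
```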