I have created a simple ASP.NET page to log the output of a process running on a remote machine.
My URL is: .com/Log.ashx?Data=SomeString
Should I use WebRequest or a WebClient?
What is more efficient and less resource intensive?
I would need to do this about 20 times a minute. The .ashx file does not produce any output.
Neither one is more efficient than the other; WebRequest just offers you more functionality than WebClient. If WebClient exposes enough functionality for what you need, then go with that. If it doesn't, use WebRequest.
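For a simple fire-and-forget logging call like the one in the question, both APIs look roughly like this. A minimal sketch (the URL is a placeholder for the real handler address):

```csharp
using System;
using System.Net;

class LogSender
{
    static void Main()
    {
        // Placeholder URL -- substitute the real Log.ashx address.
        string url = "http://example.com/Log.ashx?Data=SomeString";

        // Option 1: WebClient -- the least code for a simple GET.
        using (var client = new WebClient())
        {
            client.DownloadString(url); // the handler returns no body, so the result is ignored
        }

        // Option 2: HttpWebRequest -- more control (timeouts, headers, proxies).
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Timeout = 5000; // fail fast if the remote machine is slow
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // Nothing to read; disposing the response releases the connection.
        }
    }
}
```

For 20 calls a minute either one is fine; reusing a single WebClient instance avoids a little per-call allocation overhead.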
Related
I am trying to simulate a real web browser request, and it turns out that when I use this code:
WebClient client = new WebClient();
client.DownloadFile(address, localFilename);
I get only the GET to the address (of course), whereas the behavior in a browser is many GET requests to images, blogger, etc.
Is there a shortcut to get/simulate the same behavior or the only alternative is to parse the file/string and make all these requests by myself manually?
Yes, a browser processes the specific type of file (typically HTML) by parsing it. Depending on what the file contains (links to other files such as images, etc.), the browser then opens many other connections to fetch those files and display them within the page.
That doesn't come for free; you have to do it yourself. DownloadFile just downloads a file, which may or may not be an HTML file, so it doesn't handle all possible file types or process all linked files.
I'm trying to get the list of requests that occur within the confines of a single httpwebrequest from my aspx page.
When using Fiddler, you request a page from IE. While serving that request, the page requests some number of other files. Fiddler shows you that you are getting a .css file, a .js file, and maybe it's also requesting a couple of other pages before it renders.
I want to be able to make the httpwebrequest from my aspx page then monitor (or list out) the URLs that are being called within that request.
That said I am open to alternate ways to do the request. e.g. IFRAME, etc.
Maybe this just can't be done from an aspx page. Ideas?
If you are using an HttpWebRequest on the server, it is not going to download all of the other embedded resources. If you want to get the list of resources used on the page, you'll have to parse the HTML yourself.
Here's a related questions that might be useful: How can I use HTML Agility Pack to retrieve all the images from a website?
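As a sketch of that parsing approach, assuming the HtmlAgilityPack NuGet package and a placeholder URL, the resources a browser would fetch next can be listed like this:

```csharp
using System;
using HtmlAgilityPack; // NuGet: HtmlAgilityPack

class ResourceLister
{
    static void Main()
    {
        var web = new HtmlWeb();
        HtmlDocument doc = web.Load("http://example.com/"); // placeholder URL

        // Collect the URLs a browser would request next: images, scripts, stylesheets.
        var nodes = doc.DocumentNode.SelectNodes(
            "//img[@src] | //script[@src] | //link[@href]");
        if (nodes != null)
        {
            foreach (HtmlNode node in nodes)
            {
                string url = node.GetAttributeValue("src", null)
                          ?? node.GetAttributeValue("href", null);
                Console.WriteLine(url);
            }
        }
    }
}
```

Note that relative URLs would still need to be resolved against the page's base address before you could request them.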
This cannot be done from an ASPX page. I think you should hook into one of the Global.asax events (by writing a custom HttpModule) and intercept the requests there.
I have a C# client which, once every hour, needs to post some zip files to an ASP.NET site. This needs to be completely automated with no user interaction.
Wondering the best way to go about it.
Ideally I would like to post the file without setting up any non-.aspx/.asp pages.
Thanks for the help!
It depends on what the target site expects as content type. If it is multipart/form-data then a simple WebClient should do the job:
using (var client = new WebClient())
{
    byte[] result = client.UploadFile(
        "http://foo.com/index.aspx", @"d:\foo\bar.zip"
    );
// TODO: Handle the server response if necessary
}
Send an HttpWebRequest containing all the necessary information, including the bytes of the file. Google should help you on this one.
Nevertheless, I don't understand why you don't want to use a non-.aspx page for this. A generic handler (.ashx) is suitable for it. But I still suggest you use another way to upload the file, e.g. via FTP, with a service that watches the directory with a FileSystemWatcher to detect and act on changes.
In order to automate the task, you can use a DispatcherTimer (http://msdn.microsoft.com/en-us/library/system.windows.threading.dispatchertimer.aspx), assigning a handler to the Tick event.
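A minimal sketch of the timer wiring. DispatcherTimer is a WPF type and needs a running dispatcher loop; in a plain console or service host, System.Timers.Timer is the simpler choice. UploadZip() here is a placeholder for whatever upload call you use:

```csharp
using System;
using System.Windows.Threading; // WPF assembly (WindowsBase)

// Inside a WPF window or application startup:
var timer = new DispatcherTimer();
timer.Interval = TimeSpan.FromHours(1);      // fire once every hour
timer.Tick += (sender, e) => UploadZip();    // UploadZip() is a placeholder
timer.Start();
```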
I have an idea for an App that would really help me out in work but I'm not sure if it's possible.
I want to run a C# desktop application that will ask for a value. When a value is supplied, the application will open a browser, go to a webpage, and enter the value into a form on an online website. The form is then submitted, and a new page loads containing a table of results. I then want to extract that table from the page source and write code to parse the result values.
It is not important that the user sees this happen in an actual browser. In other words, if there's a way to do it by issuing HTTP requests directly, that's great.
The biggest problem I have is getting the values into the form and then retrieving the page source after the form is submitted and the next page loads.
Any help really appreciated.
Thanks
Provided that you're only using this in a legal context:
Usually, web forms are sent via a POST request to the web server, specifically to some script that handles it. You can look at the HTML code of the form's page and find out the destination for the form (the form's action attribute).
You can then use an HttpWebRequest in C# to "pretend you are the form", sending a POST request with all the required parameters URL-encoded in the request body.
As a result you will get the source code of the destination page as it would be sent to the browser. You can parse this.
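A sketch of that POST with HttpWebRequest. The URL and the field name are placeholders; take the real ones from the form's action attribute and its input names:

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

class FormPoster
{
    static void Main()
    {
        // Placeholder URL -- use the form's action target.
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/search.aspx");
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";

        // "query" is a placeholder field name from the form's <input>.
        byte[] body = Encoding.UTF8.GetBytes(
            "query=" + Uri.EscapeDataString("some value"));
        request.ContentLength = body.Length;
        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(body, 0, body.Length);
        }

        // The response body is the results page, ready to parse.
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string html = reader.ReadToEnd();
            Console.WriteLine(html.Length);
        }
    }
}
```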
This is definitely possible and you don't need to use an actual web browser for this. You can simply use a System.Net.WebClient to send your HTTP request and get an HTTP response.
I suggest using Wireshark (or Firefox + Firebug); it lets you see HTTP requests and responses. By looking at the HTTP traffic you can see exactly how you should construct your request and which parameters you should be setting.
You don't need to involve the browser with this. WebClient should do all that you require. You'll need to see what's actually being posted when you submit the form with the browser, and then you should be able to make a POST request using the WebClient and retrieve the resulting page as a string.
The docs for the WebClient constructor have a nice example.
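Compared with the raw HttpWebRequest version, WebClient does the encoding and streaming for you. A sketch, with a placeholder URL and field name:

```csharp
using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;

class WebClientPoster
{
    static void Main()
    {
        // Placeholder field name -- copy the real names from the form.
        var values = new NameValueCollection
        {
            { "query", "some value" }
        };

        using (var client = new WebClient())
        {
            // UploadValues sends an application/x-www-form-urlencoded POST.
            byte[] responseBytes = client.UploadValues(
                "http://example.com/search.aspx", values); // placeholder URL
            string resultPage = Encoding.UTF8.GetString(responseBytes);
            // resultPage now holds the HTML of the results page to parse.
        }
    }
}
```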
See e.g. this question for some pointers on at least the data retrieval side. You're going to know a lot more about the http protocol before you're done with this...
Why would you do this through web pages if you don't even want the user to do anything?
Web pages are purely for interaction with users, if you simply want data transfer, use WCF.
@Brian: using Wireshark will result in a very angry network manager; make sure you are actually allowed to use it.
I'm having an issue with my application, Pelotonics. When a user downloads a file, the system seems to block all incoming requests until that file is done downloading. What is the proper technique to open a download dialog box (standard from the browser), let the user start downloading the file, and then, while the file is downloading, let the user continue throughout the application?
The way we're getting the file from the server: we have a separate ASPX page that gets passed a value through the query string, retrieves the file's stream from the server, adds the "Content-Disposition" header to the Response, and then loops through the file's stream, writing 2 KB chunks out to Response.OutputStream. Once that's done, it calls Response.End.
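The approach described above can be sketched as follows (names like filePath and fileName are assumed, and this goes in the download page's handler):

```csharp
// Assumed: filePath and fileName come from the query-string lookup.
Response.Clear();
Response.ContentType = "application/octet-stream";
Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);

using (FileStream fs = File.OpenRead(filePath))
{
    byte[] buffer = new byte[2048]; // 2 KB chunks, as described
    int read;
    while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        Response.OutputStream.Write(buffer, 0, read);
        Response.Flush(); // push each chunk to the client as it is read
    }
}
Response.End();
```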
Watch this for a quick screencast on the issue:
http://www.screencast.com/users/PeloCast/folders/Jing/media/8bb4b1dd-ac66-4f84-a1a3-7fc64cd650c0
by the way, we're in ASP.NET and C#...
Thanks!!!
Daniel
I think ASP.NET allows one simultaneous page execution per session and I'm not aware of any way to configure this otherwise.
This is not a very pretty workaround, but it might help if you rewrote the ASP.NET_SessionId value on the request cookie in Application_BeginRequest (in Global.asax). Of course, you would then need to handle authentication some other way. I haven't tried this, though.
Another way would be launching a separate thread for the download process, but you would need to find a way to do this without the worker thread closing its resources.
May I ask, is there a reason why don't you just use HttpResponse.TransmitFile?
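For reference, TransmitFile hands the file off to IIS without buffering it in the page's managed memory. A minimal sketch with placeholder paths:

```csharp
// Placeholder file name and path -- substitute the real lookup.
Response.Clear();
Response.ContentType = "application/zip";
Response.AddHeader("Content-Disposition", "attachment; filename=report.zip");
Response.TransmitFile(Server.MapPath("~/files/report.zip"));
Response.End();
```

This replaces the manual 2 KB read/write loop, though it does not by itself change the session-locking behavior discussed above.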