I use WebRequest in an ASP.NET page (running on the server) to get a Google Drive direct download link. After that, I send this link to the client (a browser, e.g. Chrome or Firefox) so they can download the file. The issue is cookies: the WebRequest's cookies and the client's cookies are not the same, so of course the client can't download.
So, how can I pass the WebRequest's cookies to the client?
Or how can I make the WebRequest cookies and the client cookies the same? Thanks, and I hope someone has an idea to solve this problem. This is my code:
HttpWebRequest objWebRequest = (HttpWebRequest)WebRequest.Create(url);
objWebRequest.CookieContainer = cookies;
objWebRequest.Credentials = CredentialCache.DefaultCredentials;
HttpWebResponse objWebResponse = (HttpWebResponse)objWebRequest.GetResponse();
Stream receiveStream = objWebResponse.GetResponseStream();
StreamReader readStream = new StreamReader(receiveStream, System.Text.Encoding.UTF8);
HtmlAgilityPack.HtmlDocument doc = new HtmlDocument();
doc.Load(readStream);
string link = ""; // => this is the link I need to send to the client
foreach (HtmlNode row in doc.DocumentNode.SelectNodes("//a[@id='uc-download-link']"))
{
    link += row.Attributes["href"].Value;
}
Related
I have a request which I make to a page and it works fine. I can also view the response page with Fiddler.
But how do I open this response in my browser?
Currently what I have:
Cookie cookie = new Cookie("test","this");
cookie.Domain = "foobar";
HttpWebRequest request = (HttpWebRequest) HttpWebRequest.Create("http://foobar/ReportServer/");
request.CookieContainer = new CookieContainer();
request.CookieContainer.Add(cookie);
WebResponse response = request.GetResponse();
Stream sr = response.GetResponseStream();
StreamReader sre = new StreamReader(sr);
string s = sre.ReadToEnd();
Response.Write(s);
Save it to an HTML file and open the browser with the path to that file.
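For example, a minimal sketch (assuming the response body is already in the string s from the code above; fully qualified names are used so no extra usings are needed):
string tempFile = System.IO.Path.Combine(System.IO.Path.GetTempPath(), "response.html");
System.IO.File.WriteAllText(tempFile, s);
System.Diagnostics.Process.Start(tempFile); // opens the file with the default browser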
Because you have addressability to the response Stream, you can use the NavigateToStream method:
http://msdn.microsoft.com/en-us/library/system.windows.controls.webbrowser.navigatetostream(v=vs.100).aspx
this.webBrowser.NavigateToStream(sr);
You can use System.Windows.Forms.WebBrowser control.
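For instance, a minimal sketch that feeds the response string into the control (assuming s holds the response body from the code above):
// Display the response HTML in a WinForms WebBrowser control.
var browser = new System.Windows.Forms.WebBrowser { Dock = System.Windows.Forms.DockStyle.Fill };
browser.DocumentText = s;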
I have a computer that cannot access the internet without a proxy and authentication (username and password). So I did this:
var webProxy = new WebProxy(PROXY_ADRESS, PORT);
webProxy.Credentials = new NetworkCredential(USERNAME, PASSWORD, DOMAIN);
var webRequest = (HttpWebRequest)WebRequest.Create("https://www.google.com.br/");
webRequest.Proxy = webProxy;
HttpWebResponse response = (HttpWebResponse)webRequest.GetResponse();
Stream receiveStream = response.GetResponseStream();
webBrowser1.DocumentStream = receiveStream;
When I execute this code, the web browser loads the Google page (with some script errors), but it does not load some of the images, and when I click Search or any other button the page goes blank except for a bit of text showing the end of the URL, for example: /search.
How can I navigate to other pages and have them load fully (including the images)?
I've followed this tutorial and now it's working:
http://www.journeyintocode.com/2013/08/c-webbrowser-control-proxy.html
I have created an ASP.NET MVC View. On my MVC WebApp, it works great.
I would like to be able to render the View out as an HTML email from a console app. I'm wondering what the best way to do this is; the part I'm struggling with is rendering the View.
Is there any way to do this from a console application?
The web app simply calls a web service and formats the data nicely, so the console application will have access to the same web service; however, the ActionResult on the controller is protected by [Authorize] attributes, so not just anyone can get at it.
Yes, you can. I assume you are using forms authentication. Just authenticate, grab the session cookie from the response headers, and copy it into your new web request.
I ended up using HttpWebRequest and the info provided here: http://odetocode.com/articles/162.aspx
From the article:
// first, request the login form to get the viewstate value
HttpWebRequest webRequest = WebRequest.Create(LOGIN_URL) as HttpWebRequest;
StreamReader responseReader = new StreamReader(
    webRequest.GetResponse().GetResponseStream());
string responseData = responseReader.ReadToEnd();
responseReader.Close();
// extract the viewstate value and build out POST data
string viewState = ExtractViewState(responseData);
string postData = String.Format(
    "__VIEWSTATE={0}&UsernameTextBox={1}&PasswordTextBox={2}&LoginButton=Login",
    viewState, USERNAME, PASSWORD);
// have a cookie container ready to receive the forms auth cookie
CookieContainer cookies = new CookieContainer();
// now post to the login form
webRequest = WebRequest.Create(LOGIN_URL) as HttpWebRequest;
webRequest.Method = "POST";
webRequest.ContentType = "application/x-www-form-urlencoded";
webRequest.CookieContainer = cookies;
// write the form values into the request message
StreamWriter requestWriter = new StreamWriter(webRequest.GetRequestStream());
requestWriter.Write(postData);
requestWriter.Close();
// we don't need the contents of the response, just the cookie it issues
webRequest.GetResponse().Close();
// now we can send our cookie along with a request for the protected page
webRequest = WebRequest.Create(SECRET_PAGE_URL) as HttpWebRequest;
webRequest.CookieContainer = cookies;
responseReader = new StreamReader(webRequest.GetResponse().GetResponseStream());
// and read the response
responseData = responseReader.ReadToEnd();
responseReader.Close();
Response.Write(responseData);
How do you log in to a webpage and retrieve its content in C#?
That depends on what's required to log in. You could use a WebClient to send the login credentials to the server's login page (via whatever method is required, GET or POST), but that wouldn't persist a cookie. There is a way to get a WebClient to handle cookies, so you could just POST the login info to the server, then request the page you want with the same WebClient, and do whatever you want with the page.
Look at System.Net.WebClient, or, for more advanced requirements, System.Net.HttpWebRequest/System.Net.HttpWebResponse.
As for actually applying these: you'll have to study the HTML source of each page you want to scrape in order to learn exactly which HTTP requests it's expecting.
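For what it's worth, here's a rough sketch of that flow using the newer HttpClient API; the URLs and form field names are hypothetical placeholders you'd replace after inspecting the real login form:
using System.Collections.Generic;
using System.Net;
using System.Net.Http;

// HttpClientHandler's CookieContainer persists the session cookie
// between the login POST and the subsequent GET.
var handler = new HttpClientHandler { CookieContainer = new CookieContainer() };
using (var client = new HttpClient(handler))
{
    var form = new FormUrlEncodedContent(new Dictionary<string, string>
    {
        { "username", "your-user" },   // hypothetical field names
        { "password", "your-pass" },
    });
    client.PostAsync("http://example.com/login", form).Result.EnsureSuccessStatusCode();
    string page = client.GetStringAsync("http://example.com/protected").Result;
}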
What do you mean by "login"?
If the subfolder is protected at the OS level, and the browser pops up a login dialog when you go there, you will need to set the Credentials property on the HttpWebRequest, e.g.:
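// Minimal sketch for the OS-level auth case; the URL and credentials are placeholders.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/protected/");
request.Credentials = new NetworkCredential("username", "password", "DOMAIN");
// or forward the identity the application is running as:
// request.Credentials = CredentialCache.DefaultCredentials;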
If the website has its own cookie-based membership/login system, you will have to use HttpWebRequest to first POST to the login form:
string postData = "userid=ducon";
postData += "&username=camarche" ;
byte[] data = Encoding.ASCII.GetBytes(postData);
WebRequest req = WebRequest.Create(URL);
req.Method = "POST";
req.ContentType = "application/x-www-form-urlencoded";
req.ContentLength = data.Length;
Stream newStream = req.GetRequestStream();
newStream.Write(data, 0, data.Length);
newStream.Close();
StreamReader reader = new StreamReader(req.GetResponse().GetResponseStream(), System.Text.Encoding.GetEncoding("iso-8859-1"));
string coco = reader.ReadToEnd();
Use the WebClient class.
Dim Html As String
Using Client As New System.Net.WebClient()
Html = Client.DownloadString("http://www.google.com")
End Using
You can use the built-in WebClient object instead of creating the request yourself.
WebClient wc = new WebClient();
wc.Credentials = new NetworkCredential("username", "password");
string url = "http://foo.com";
try
{
using (Stream stream = wc.OpenRead(new Uri(url)))
{
using (StreamReader reader = new StreamReader(stream))
{
return reader.ReadToEnd();
}
}
}
catch (WebException e)
{
//Error handling
}
Try this:
public string GetContent(string url)
{
using (System.Net.WebClient client = new System.Net.WebClient())
{
return client.DownloadString(url);
}
}
I have an application that reads parts of the source code on a website. That all works, but the problem is that the page in question requires the user to be logged in to access this source code. What my program needs is a way to initially log the user into the website; after that is done, I'll be able to access and read the source code.
The website that needs to be logged into is:
mmoinn.com/index.do?PageModule=UsersLogin
You can continue using WebClient to POST (instead of GET, which is the HTTP verb you're currently using with DownloadString), but I think you'll find it easier to work with the (slightly) lower-level classes WebRequest and WebResponse.
There are two parts to this - the first is to post the login form, the second is recovering the "Set-cookie" header and sending that back to the server as "Cookie" along with your GET request. The server will use this cookie to identify you from now on (assuming it's using cookie-based authentication which I'm fairly confident it is as that page returns a Set-cookie header which includes "PHPSESSID").
POSTing to the login form
Form posts are easy to simulate; it's just a case of formatting your post data as follows:
field1=value1&field2=value2
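One caveat: if a value can contain reserved characters such as '&', '=' or spaces, URL-encode it first, for example:
// Escape each value before concatenating it into the post data.
string postData = string.Format("field1={0}&field2={1}",
    Uri.EscapeDataString("value with spaces & ampersands"),
    Uri.EscapeDataString("value2"));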
Using WebRequest and code I adapted from Scott Hanselman, here's how you'd POST form data to your login form:
string formUrl = "http://www.mmoinn.com/index.do?PageModule=UsersAction&Action=UsersLogin"; // NOTE: This is the URL the form POSTs to, not the URL of the form (you can find it in the "action" attribute of the HTML form tag)
string formParams = string.Format("email_address={0}&password={1}", "your email", "your password");
string cookieHeader;
WebRequest req = WebRequest.Create(formUrl);
req.ContentType = "application/x-www-form-urlencoded";
req.Method = "POST";
byte[] bytes = Encoding.ASCII.GetBytes(formParams);
req.ContentLength = bytes.Length;
using (Stream os = req.GetRequestStream())
{
os.Write(bytes, 0, bytes.Length);
}
WebResponse resp = req.GetResponse();
cookieHeader = resp.Headers["Set-cookie"];
Here's an example of what you should see in the Set-cookie header for your login form:
PHPSESSID=c4812cffcf2c45e0357a5a93c137642e; path=/; domain=.mmoinn.com,wowmine_referer=directenter; path=/; domain=.mmoinn.com,lang=en; path=/;domain=.mmoinn.com,adt_usertype=other,adt_host=-
GETting the page behind the login form
Now you can perform your GET request to a page that you need to be logged in for.
string pageSource;
string getUrl = "the url of the page behind the login";
WebRequest getRequest = WebRequest.Create(getUrl);
getRequest.Headers.Add("Cookie", cookieHeader);
WebResponse getResponse = getRequest.GetResponse();
using (StreamReader sr = new StreamReader(getResponse.GetResponseStream()))
{
pageSource = sr.ReadToEnd();
}
EDIT:
If you need to view the results of the first POST, you can recover the HTML it returned with:
using (StreamReader sr = new StreamReader(resp.GetResponseStream()))
{
pageSource = sr.ReadToEnd();
}
Place this directly below cookieHeader = resp.Headers["Set-cookie"]; and then inspect the string held in pageSource.
You can simplify things quite a bit by creating a class that derives from WebClient, overriding its GetWebRequest method and setting a CookieContainer object on it. If you always set the same CookieContainer instance, then cookie management will be handled automatically for you.
But the only way to get at the HttpWebRequest before it is sent is to inherit from WebClient and override that method.
public class CookieAwareWebClient : WebClient
{
    // A single container shared across all requests made by this client,
    // so cookies set by one response are sent with subsequent requests.
    private CookieContainer cookie = new CookieContainer();

    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest request = base.GetWebRequest(address);
        if (request is HttpWebRequest)
        {
            (request as HttpWebRequest).CookieContainer = cookie;
        }
        return request;
    }
}
var client = new CookieAwareWebClient();
client.BaseAddress = @"https://www.site.com/any/base/url/";
var loginData = new NameValueCollection();
loginData.Add("login", "YourLogin");
loginData.Add("password", "YourPassword");
client.UploadValues("login.php", "POST", loginData);
//Now you are logged in and can request pages
string htmlSource = client.DownloadString("index.php");
Matthew Brindley, your code worked very well for a website I needed it for (one with a login), but I had to change to HttpWebRequest and HttpWebResponse, otherwise I got a 404 error from the remote server. I would also like to share a workaround using your code: I tried to log in to a Moodle-based website, but it failed at your "GETting the page behind the login form" step, because even after a successful login POST, the 'Set-Cookie' header didn't return anything, although other websites do.
So I think this is where we need to store the cookies for the next requests, so I added this.
To the "POSTing to the login form" code block :
var cookies = new CookieContainer();
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(formUrl);
req.CookieContainer = cookies;
And to the "GETting the page behind the login form" section:
HttpWebRequest getRequest = (HttpWebRequest)WebRequest.Create(getUrl);
getRequest.CookieContainer = new CookieContainer();
getRequest.CookieContainer.Add(resp.Cookies);
getRequest.Headers.Add("Cookie", cookieHeader);
Doing this lets me log in and get the source code of the "page behind the login" (a Moodle-based website). I know this is a loose use of CookieContainer and HTTP cookies, since one might first ask whether a previously saved set of cookies exists before sending the request to the server. It works without problems anyway, but here is some good reading about WebRequest and WebResponse, with sample projects and a tutorial:
Retrieving HTTP content in .NET
How to use HttpWebRequest and HttpWebResponse in .NET
Sometimes it may help to switch off AllowAutoRedirect and to set the same user agent on both the login POST and the page GET requests (see the sketch after this snippet):
request.UserAgent = userAgent;
request.AllowAutoRedirect = false;
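Applied to the earlier example, that would look something like this (the user-agent string is arbitrary; note that with auto-redirect disabled you have to follow any 302 from the login POST yourself, reusing the same CookieContainer):
// Give the login POST and the page GET the same user agent, and disable
// auto-redirect so the Set-Cookie header on the login response isn't
// consumed by a transparent redirect.
string userAgent = "Mozilla/5.0 (compatible; MyClient/1.0)";

HttpWebRequest loginRequest = (HttpWebRequest)WebRequest.Create(formUrl);
loginRequest.UserAgent = userAgent;
loginRequest.AllowAutoRedirect = false;

HttpWebRequest pageRequest = (HttpWebRequest)WebRequest.Create(getUrl);
pageRequest.UserAgent = userAgent;
pageRequest.AllowAutoRedirect = false;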