How to convert a CookieContainer to a string? - c#

I tried to iterate over the cookie container with a foreach loop, but it returns an error. What is the correct way to convert a CookieContainer to a string?
foreach (Cookie item in cookieContainer)
{
var data = item.Value + "=" + item.Name;
}
Error 2: The foreach statement cannot operate on variables of type
"System.Net.CookieContainer"

If you are only interested in the cookies for a specific domain, then you can iterate using the GetCookies() method.
var cookieContainer = new CookieContainer();
var testCookie = new Cookie("test", "testValue");
var uri = new Uri("https://www.google.com");
cookieContainer.Add(uri, testCookie);
foreach (var cookie in cookieContainer.GetCookies(uri))
{
Console.WriteLine(cookie.ToString()); // test=testValue
}
If you're interested in getting all the cookies, then you may need to use reflection, as provided by this answer.
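On .NET 6 and later you can skip reflection entirely: CookieContainer exposes GetAllCookies(), which returns every cookie across all domains. A minimal sketch that flattens the container into a single string (cookie names, values, and domains here are made up for illustration):

```csharp
using System;
using System.Linq;
using System.Net;

class Program
{
    static void Main()
    {
        var container = new CookieContainer();
        // Illustrative cookies for two different domains.
        container.Add(new Uri("https://example.com"), new Cookie("session", "abc123"));
        container.Add(new Uri("https://other.example.org"), new Cookie("token", "xyz789"));

        // GetAllCookies() (available from .NET 6 onward) enumerates
        // every cookie in the container regardless of domain.
        string all = string.Join("; ",
            container.GetAllCookies()
                     .Select(c => c.Name + "=" + c.Value));

        Console.WriteLine(all); // e.g. session=abc123; token=xyz789
    }
}
```

On .NET Framework, GetAllCookies() is not available, so the reflection approach from the linked answer is still needed there.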

Sample :
public static void Main(string[] args)
{
if (args == null || args.Length != 1)
{
Console.WriteLine("Specify the URL to receive the request.");
Environment.Exit(1);
}
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(args[0]);
request.CookieContainer = new CookieContainer();
HttpWebResponse response = (HttpWebResponse) request.GetResponse();
// Print the properties of each cookie.
foreach (Cookie cook in response.Cookies)
{
// Show the string representation of the cookie.
Console.WriteLine ("String: {0}", cook.ToString());
}
}

Related

Windows Forms Bad Request when Trying to get Response from website

I have a list of URLs. The point of this is that I am checking our websites: if any of them is down or offline we get a notification. That works, except some of the URLs crash at this line
HttpWebResponse httpRes = (HttpWebResponse)httpReq.GetResponse();
But the rest work just fine. Can anyone tell me what I'm doing wrong? I've tried URLs with HTTPS, HTTP and even with only www...
public void CheckUrl()//List Of URLs
{
List<string> urls = new List<string>() {
"https://www.example.com/something1/buy",
"https://www.example.com/something2/buy",
"https://www.example.com/something3/buy",
"https://www.example.com/something4/buy",
};
//Walk through all the URLs
foreach (var url in urls)
{
//Creating URL Request
HttpWebRequest httpReq = (HttpWebRequest)WebRequest.Create(url);
httpReq.AllowAutoRedirect = false;
try
{
WebClient client = new WebClient();
string downloadString = client.DownloadString(url);
//Trying to find a response
HttpWebResponse httpRes = (HttpWebResponse)httpReq.GetResponse();
if ((httpRes.StatusCode != HttpStatusCode.OK || httpRes.StatusCode != HttpStatusCode.Found) && downloadString.Contains("404 -") || downloadString.Contains("Server Error"))
{
// Code for NotFound resources goes here.
SendVerificationLinkEmail(httpRes.StatusCode.ToString(), url);
foreach (var number in Numbers)
{
SendSms(url, number);
}
}
//Close the response.
httpRes.Close();
}
catch(Exception e)
{
//sending only to admin to check it out first
SendExeptionUrl(url);
foreach (var number in Numbers)
{
SendSms(url, number);
}
}
}
Application.Exit();
}
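Since only some entries crash while the rest work, it can help to validate each URL before issuing a request. A minimal sketch using Uri.TryCreate (the URLs are placeholders, not from the original list):

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        var urls = new List<string>
        {
            "https://www.example.com/something1/buy",
            "https//:www.example.com/something2/buy", // malformed scheme
        };

        foreach (var url in urls)
        {
            // Uri.TryCreate rejects malformed URLs instead of throwing,
            // so bad entries can be logged and skipped up front.
            bool valid = Uri.TryCreate(url, UriKind.Absolute, out Uri parsed)
                         && (parsed.Scheme == Uri.UriSchemeHttp
                             || parsed.Scheme == Uri.UriSchemeHttps);
            Console.WriteLine((valid ? "OK:   " : "SKIP: ") + url);
        }
    }
}
```

Uri.TryCreate returns false for strings whose scheme is malformed, so those entries can be reported instead of throwing inside the request code.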

Forward HTTP request to another server

I have the following code that receives webhook messages:
// Read posted data
string requestBody;
using (var reader = new StreamReader(HttpContext.Current.Request.InputStream))
{
requestBody = reader.ReadToEnd();
}
requestBody.Log();
// Attempt to forward request
context.CopyTo(Settings.Payments.Paypal.ScirraPaypalIPNEndpoint);
requestBody contains data which is logged. I then attempt to forward the request to another URL:
public static void CopyTo(this HttpContext source, string url)
{
var destination = (HttpWebRequest) WebRequest.Create(url);
var request = source.Request;
destination.Method = request.HttpMethod;
// Copy unrestricted headers
foreach (var headerKey in request.Headers.AllKeys)
{
if (WebHeaderCollection.IsRestricted(headerKey)) continue;
destination.Headers[headerKey] = request.Headers[headerKey];
}
// Copy restricted headers
if (request.AcceptTypes != null && request.AcceptTypes.Any())
{
destination.Accept = string.Join(",", request.AcceptTypes);
}
destination.ContentType = request.ContentType;
destination.Referer = request.UrlReferrer?.AbsoluteUri ?? string.Empty;
destination.UserAgent = request.UserAgent;
// Copy content (if content body is allowed)
if (request.HttpMethod != "GET"
&& request.HttpMethod != "HEAD"
&& request.ContentLength > 0)
{
using (var destinationStream = destination.GetRequestStream())
{
request.InputStream.Position = 0;
request.InputStream.CopyTo(destinationStream);
destinationStream.Close();
}
}
if (!Settings.Deployment.IsLive)
{
ServicePointManager.ServerCertificateValidationCallback =
(sender, certificate, chain, sslPolicyErrors) => true;
}
using (var response = destination.GetResponse() as HttpWebResponse)
{
if (response == null) throw new Exception("Failed to post to " + url);
}
}
The handler that receives this forwarded request has the code:
public void ProcessRequest(HttpContext context)
{
string requestBody;
using (var reader = new StreamReader(HttpContext.Current.Request.InputStream))
{
requestBody = reader.ReadToEnd();
}
requestBody.Log();
}
However on the handler forwarded to, requestBody is always empty! What am I doing wrong here?
Both servers are hosted behind Cloudflare; when posting from one to the other I got a CF 1000 (prohibited IP) error.
The solution is to add the target server's IP address to the requesting server's hosts file.

How can I get html from a page with Cloudflare ddos protection?

I use HtmlAgilityPack to get webpage data, but I have tried everything with a page that uses www.cloudflare.com protection against ddos. The redirect page is not possible to handle in HtmlAgilityPack because they don't redirect with meta nor js, I guess; they check whether you have already been verified with a cookie, which I failed to simulate with c#. When I get the page, the html code is from the Cloudflare landing page.
I also encountered this problem some time ago. The real solution would be to solve the challenge the cloudflare website gives you (you need to compute a correct answer using javascript, send it back, and then you receive a cookie / your token with which you can continue to view the website). Normally all you get back is the Cloudflare interstitial page.
In the end, I just called a python script with a shell-execute. I used the modules provided within this github fork. This could serve as a starting point to implement the circumvention of the cloudflare anti-dDoS page in C# as well.
FYI, the python script I wrote for my personal usage just writes the cookie to a file. I read that back later using C# and store it in a CookieJar to continue browsing the page from C#.
#!/usr/bin/env python
import cfscrape
import sys
scraper = cfscrape.create_scraper() # returns a requests.Session object
fd = open("cookie.txt", "w")
c = cfscrape.get_cookie_string(sys.argv[1])
fd.write(str(c))
fd.close()
print(c)
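The C# side that reads cookie.txt back is not shown above; here is a minimal sketch, assuming the file holds a single "name=value; name2=value2" cookie string as printed by the script (the file name, cookie names, and domain are illustrative):

```csharp
using System;
using System.IO;
using System.Net;

class Program
{
    static void Main()
    {
        // Illustrative stand-in for the cookie string the python script wrote.
        File.WriteAllText("cookie.txt", "__cfduid=d1234; cf_clearance=abcd");

        var jar = new CookieContainer();
        var uri = new Uri("https://example.com"); // the protected site

        // Split the "name=value; name=value" pairs and load them into the jar.
        foreach (var pair in File.ReadAllText("cookie.txt").Split(';'))
        {
            var parts = pair.Split(new[] { '=' }, 2);
            if (parts.Length == 2)
                jar.Add(uri, new Cookie(parts[0].Trim(), parts[1].Trim()));
        }

        Console.WriteLine(jar.GetCookies(uri).Count); // 2
    }
}
```

The jar can then be assigned to HttpWebRequest.CookieContainer (or a WebClient subclass like the ones below) to continue browsing the site.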
EDIT: To repeat: this has only a LITTLE to do with cookies! Cloudflare forces you to solve a REAL challenge using javascript commands. It's not as easy as accepting a cookie and using it later on. Look at https://github.com/Anorov/cloudflare-scrape/blob/master/cfscrape/init.py and the ~40 lines of javascript emulation that solve the challenge.
Edit2: Instead of writing something to circumvent the protection, I've also seen people use a fully-fledged browser object (not a headless browser) to go to the website and subscribe to certain events when the page is loaded. Use the WebBrowser class to create an infinitely small browser window and subscribe to the appropriate events.
Edit3:
Alright, I actually implemented the C# way to do this. This uses the JavaScript Engine Jint for .NET, available via https://www.nuget.org/packages/Jint
The cookie-handling code is ugly because sometimes the HttpResponse class won't pick up the cookies, although the header contains a Set-Cookie section.
using System;
using System.Net;
using System.IO;
using System.Text.RegularExpressions;
using System.Web;
using System.Collections;
using System.Threading;
namespace Cloudflare_Evader
{
public class CloudflareEvader
{
/// <summary>
/// Tries to return a webclient with the necessary cookies installed to do requests for a cloudflare protected website.
/// </summary>
/// <param name="url">The page which is behind cloudflare's anti-dDoS protection</param>
/// <returns>A WebClient object or null on failure</returns>
public static WebClient CreateBypassedWebClient(string url)
{
var JSEngine = new Jint.Engine(); //Use this JavaScript engine to compute the result.
//Download the original page
var uri = new Uri(url);
HttpWebRequest req =(HttpWebRequest) WebRequest.Create(url);
req.UserAgent = "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0";
//Try to make the usual request first. If this fails with a 503, the page is behind cloudflare.
try
{
var res = req.GetResponse();
string html = "";
using (var reader = new StreamReader(res.GetResponseStream()))
html = reader.ReadToEnd();
return new WebClient();
}
catch (WebException ex) //We usually get this because of a 503 service not available.
{
string html = "";
using (var reader = new StreamReader(ex.Response.GetResponseStream()))
html = reader.ReadToEnd();
//If we get on the landing page, Cloudflare gives us a User-ID token with the cookie. We need to save that and use it in the next request.
var cookie_container = new CookieContainer();
//using a custom function because ex.Response.Cookies returns an empty set ALTHOUGH cookies were sent back.
var initial_cookies = GetAllCookiesFromHeader(ex.Response.Headers["Set-Cookie"], uri.Host);
foreach (Cookie init_cookie in initial_cookies)
cookie_container.Add(init_cookie);
/* solve the actual challenge with a bunch of RegEx's. Copy-Pasted from the python scrapper version.*/
var challenge = Regex.Match(html, "name=\"jschl_vc\" value=\"(\\w+)\"").Groups[1].Value;
var challenge_pass = Regex.Match(html, "name=\"pass\" value=\"(.+?)\"").Groups[1].Value;
var builder = Regex.Match(html, @"setTimeout\(function\(\){\s+(var t,r,a,f.+?\r?\n[\s\S]+?a\.value =.+?)\r?\n").Groups[1].Value;
builder = Regex.Replace(builder, @"a\.value =(.+?) \+ .+?;", "$1");
builder = Regex.Replace(builder, @"\s{3,}[a-z](?: = |\.).+", "");
//Format the javascript..
builder = Regex.Replace(builder, @"[\n\\']", "");
//Execute it.
long solved = long.Parse(JSEngine.Execute(builder).GetCompletionValue().ToObject().ToString());
solved += uri.Host.Length; //add the length of the domain to it.
Console.WriteLine("***** SOLVED CHALLENGE ******: " + solved);
Thread.Sleep(3000); //This sleep IS required or cloudflare will not give you the token!!
//Retrieve the cookies. Prepare the URL for cookie exfiltration.
string cookie_url = string.Format("{0}://{1}/cdn-cgi/l/chk_jschl", uri.Scheme, uri.Host);
var uri_builder = new UriBuilder(cookie_url);
var query = HttpUtility.ParseQueryString(uri_builder.Query);
//Add our answers to the GET query
query["jschl_vc"] = challenge;
query["jschl_answer"] = solved.ToString();
query["pass"] = challenge_pass;
uri_builder.Query = query.ToString();
//Create the actual request to get the security clearance cookie
HttpWebRequest cookie_req = (HttpWebRequest) WebRequest.Create(uri_builder.Uri);
cookie_req.AllowAutoRedirect = false;
cookie_req.CookieContainer = cookie_container;
cookie_req.Referer = url;
cookie_req.UserAgent = "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0";
//We assume that this request goes through well, so no try-catch
var cookie_resp = (HttpWebResponse)cookie_req.GetResponse();
//The response *should* contain the security clearance cookie!
if (cookie_resp.Cookies.Count != 0) //first check if the HttpWebResponse has picked up the cookie.
foreach (Cookie cookie in cookie_resp.Cookies)
cookie_container.Add(cookie);
else //otherwise, use the custom function again
{
//the cookie we *hopefully* received here is the cloudflare security clearance token.
if (cookie_resp.Headers["Set-Cookie"] != null)
{
var cookies_parsed = GetAllCookiesFromHeader(cookie_resp.Headers["Set-Cookie"], uri.Host);
foreach (Cookie cookie in cookies_parsed)
cookie_container.Add(cookie);
}
else
{
//No security clearence? something went wrong.. return null.
//Console.WriteLine("MASSIVE ERROR: COULDN'T GET CLOUDFLARE CLEARANCE!");
return null;
}
}
//Create a custom webclient with the two cookies we already acquired.
WebClient modedWebClient = new WebClientEx(cookie_container);
modedWebClient.Headers.Add("User-Agent", "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0");
modedWebClient.Headers.Add("Referer", url);
return modedWebClient;
}
}
/* Credit goes to https://stackoverflow.com/questions/15103513/httpwebresponse-cookies-empty-despite-set-cookie-header-no-redirect
(user https://stackoverflow.com/users/541404/cameron-tinker) for these functions
*/
public static CookieCollection GetAllCookiesFromHeader(string strHeader, string strHost)
{
ArrayList al = new ArrayList();
CookieCollection cc = new CookieCollection();
if (strHeader != string.Empty)
{
al = ConvertCookieHeaderToArrayList(strHeader);
cc = ConvertCookieArraysToCookieCollection(al, strHost);
}
return cc;
}
private static ArrayList ConvertCookieHeaderToArrayList(string strCookHeader)
{
strCookHeader = strCookHeader.Replace("\r", "");
strCookHeader = strCookHeader.Replace("\n", "");
string[] strCookTemp = strCookHeader.Split(',');
ArrayList al = new ArrayList();
int i = 0;
int n = strCookTemp.Length;
while (i < n)
{
if (strCookTemp[i].IndexOf("expires=", StringComparison.OrdinalIgnoreCase) > 0)
{
al.Add(strCookTemp[i] + "," + strCookTemp[i + 1]);
i = i + 1;
}
else
al.Add(strCookTemp[i]);
i = i + 1;
}
return al;
}
private static CookieCollection ConvertCookieArraysToCookieCollection(ArrayList al, string strHost)
{
CookieCollection cc = new CookieCollection();
int alcount = al.Count;
string strEachCook;
string[] strEachCookParts;
for (int i = 0; i < alcount; i++)
{
strEachCook = al[i].ToString();
strEachCookParts = strEachCook.Split(';');
int intEachCookPartsCount = strEachCookParts.Length;
string strCNameAndCValue = string.Empty;
string strPNameAndPValue = string.Empty;
string strDNameAndDValue = string.Empty;
string[] NameValuePairTemp;
Cookie cookTemp = new Cookie();
for (int j = 0; j < intEachCookPartsCount; j++)
{
if (j == 0)
{
strCNameAndCValue = strEachCookParts[j];
if (strCNameAndCValue != string.Empty)
{
int firstEqual = strCNameAndCValue.IndexOf("=");
string firstName = strCNameAndCValue.Substring(0, firstEqual);
string allValue = strCNameAndCValue.Substring(firstEqual + 1, strCNameAndCValue.Length - (firstEqual + 1));
cookTemp.Name = firstName;
cookTemp.Value = allValue;
}
continue;
}
if (strEachCookParts[j].IndexOf("path", StringComparison.OrdinalIgnoreCase) >= 0)
{
strPNameAndPValue = strEachCookParts[j];
if (strPNameAndPValue != string.Empty)
{
NameValuePairTemp = strPNameAndPValue.Split('=');
if (NameValuePairTemp[1] != string.Empty)
cookTemp.Path = NameValuePairTemp[1];
else
cookTemp.Path = "/";
}
continue;
}
if (strEachCookParts[j].IndexOf("domain", StringComparison.OrdinalIgnoreCase) >= 0)
{
strPNameAndPValue = strEachCookParts[j];
if (strPNameAndPValue != string.Empty)
{
NameValuePairTemp = strPNameAndPValue.Split('=');
if (NameValuePairTemp[1] != string.Empty)
cookTemp.Domain = NameValuePairTemp[1];
else
cookTemp.Domain = strHost;
}
continue;
}
}
if (cookTemp.Path == string.Empty)
cookTemp.Path = "/";
if (cookTemp.Domain == string.Empty)
cookTemp.Domain = strHost;
cc.Add(cookTemp);
}
return cc;
}
}
/*Credit goes to https://stackoverflow.com/questions/1777221/using-cookiecontainer-with-webclient-class
(user https://stackoverflow.com/users/129124/pavel-savara) */
public class WebClientEx : WebClient
{
public WebClientEx(CookieContainer container)
{
this.container = container;
}
public CookieContainer CookieContainer
{
get { return container; }
set { container = value; }
}
private CookieContainer container = new CookieContainer();
protected override WebRequest GetWebRequest(Uri address)
{
WebRequest r = base.GetWebRequest(address);
var request = r as HttpWebRequest;
if (request != null)
{
request.CookieContainer = container;
}
return r;
}
protected override WebResponse GetWebResponse(WebRequest request, IAsyncResult result)
{
WebResponse response = base.GetWebResponse(request, result);
ReadCookies(response);
return response;
}
protected override WebResponse GetWebResponse(WebRequest request)
{
WebResponse response = base.GetWebResponse(request);
ReadCookies(response);
return response;
}
private void ReadCookies(WebResponse r)
{
var response = r as HttpWebResponse;
if (response != null)
{
CookieCollection cookies = response.Cookies;
container.Add(cookies);
}
}
}
}
The function will return a webclient with the solved challenges and cookies inside. You can use it as follows:
static void Main(string[] args)
{
WebClient client = null;
while (client == null)
{
Console.WriteLine("Trying..");
client = CloudflareEvader.CreateBypassedWebClient("http://anilinkz.tv");
}
Console.WriteLine("Solved! We're clear to go");
Console.WriteLine(client.DownloadString("http://anilinkz.tv/anime-list"));
Console.ReadLine();
}
A "simple" working method to bypass Cloudflare if you don't use libraries (which sometimes do not work).
Open a "hidden" WebBrowser (size 1,1 or so).
Open the root of your target Cloudflare site.
Get the cookies from WebBrowser.
Use these cookies in WebClient.
Make sure the UserAgent for both the WebBrowser and the WebClient is identical. Cloudflare will give you a 503 on the WebClient afterwards if there is a mismatch.
You will need to search here on Stack Overflow for how to get cookies from a WebBrowser and how to modify WebClient so you can set its CookieContainer, plus modify the UserAgent on one or both so they are identical.
Since the cookies from Cloudflare seem to never expire, you can then serialize the cookies to somewhere temporary and load them each time you run your app, perhaps with a verification and a refetch if that fails.
Been doing this for a while and it works quite well. I could not get the C# libs to work for one specific Cloudflare site while they worked on others; no clue why yet.
This also works behind the scenes on an IIS server, but you will have to set up "frowned upon" settings. That is, run the app pool as SYSTEM or ADMIN and set it to Classic mode.
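Serializing the cookies as suggested can be as simple as writing name/value pairs to a file and reloading them into a fresh CookieContainer at startup. A rough sketch (file name, site, and cookie names are illustrative; persist Expires/Path/Domain too if you need them):

```csharp
using System;
using System.IO;
using System.Linq;
using System.Net;

class CookiePersistence
{
    // Save the cookies for one site as "name<TAB>value" lines.
    public static void Save(CookieContainer jar, Uri site, string path)
    {
        var lines = jar.GetCookies(site)
                       .Cast<Cookie>()
                       .Select(c => c.Name + "\t" + c.Value);
        File.WriteAllLines(path, lines);
    }

    // Load them back into a new container.
    public static CookieContainer Load(Uri site, string path)
    {
        var jar = new CookieContainer();
        foreach (var line in File.ReadAllLines(path))
        {
            var parts = line.Split('\t');
            jar.Add(site, new Cookie(parts[0], parts[1]));
        }
        return jar;
    }

    static void Main()
    {
        var site = new Uri("https://example.com");
        var jar = new CookieContainer();
        jar.Add(site, new Cookie("cf_clearance", "abc123"));

        Save(jar, site, "cookies.txt");
        var restored = Load(site, "cookies.txt");
        Console.WriteLine(restored.GetCookies(site)["cf_clearance"].Value); // abc123
    }
}
```

The restored container can then be handed to a WebClient subclass (such as WebClientEx above) on the next run instead of solving the challenge again.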
Nowadays an answer should include the Flaresolverr project.
It is meant to be deployed as a container using Docker, so you only have to give it a port and it's running.
It doesn't impact your project, as you don't import a library, and it is currently maintained. The only downside I see is that you need to install Docker to make it work.
Use WebClient to get the html of the page.
I wrote the following class, which handles cookies too.
Just pass a CookieContainer instance to the constructor.
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;
using System.Net;
using System.Text;
namespace NitinJS
{
public class SmsWebClient : WebClient
{
public SmsWebClient(CookieContainer container, Dictionary<string, string> Headers)
: this(container)
{
foreach (var keyVal in Headers)
{
this.Headers[keyVal.Key] = keyVal.Value;
}
}
public SmsWebClient(bool flgAddContentType = true)
: this(new CookieContainer(), flgAddContentType)
{
}
public SmsWebClient(CookieContainer container, bool flgAddContentType = true)
{
this.Encoding = Encoding.UTF8;
System.Net.ServicePointManager.Expect100Continue = false;
ServicePointManager.MaxServicePointIdleTime = 2000;
this.container = container;
if (flgAddContentType)
this.Headers["Content-Type"] = "application/json";//"application/x-www-form-urlencoded";
this.Headers["Accept"] = "application/json, text/javascript, */*; q=0.01";// "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
//this.Headers["Accept-Encoding"] = "gzip, deflate";
this.Headers["Accept-Language"] = "en-US,en;q=0.5";
this.Headers["User-Agent"] = "Mozilla/5.0 (Windows NT 6.1; rv:23.0) Gecko/20100101 Firefox/23.0";
this.Headers["X-Requested-With"] = "XMLHttpRequest";
//this.Headers["Connection"] = "keep-alive";
}
private readonly CookieContainer container = new CookieContainer();
protected override WebRequest GetWebRequest(Uri address)
{
WebRequest r = base.GetWebRequest(address);
var request = r as HttpWebRequest;
if (request != null)
{
request.CookieContainer = container;
request.Timeout = 3600000; //60 * 60 * 1000 = one hour
}
return r;
}
protected override WebResponse GetWebResponse(WebRequest request, IAsyncResult result)
{
WebResponse response = base.GetWebResponse(request, result);
ReadCookies(response);
return response;
}
protected override WebResponse GetWebResponse(WebRequest request)
{
WebResponse response = base.GetWebResponse(request);
ReadCookies(response);
return response;
}
private void ReadCookies(WebResponse r)
{
var response = r as HttpWebResponse;
if (response != null)
{
CookieCollection cookies = response.Cookies;
container.Add(cookies);
}
}
}
}
USAGE:
CookieContainer cookies = new CookieContainer();
SmsWebClient client = new SmsWebClient(cookies);
string html = client.DownloadString("http://www.google.com");

WebException when loading rss feed

I am attempting to load a page I've received from an RSS feed and I receive the following WebException:
Cannot handle redirect from HTTP/HTTPS protocols to other dissimilar ones.
with an inner exception:
Invalid URI: The hostname could not be parsed.
I wrote code that attempts to load the url via an HttpWebRequest. Following some suggestions I received, when the HttpWebRequest fails I set AllowAutoRedirect to false and manually loop through the redirect iterations until I find out what ultimately fails. Here's the code I'm using; please forgive the gratuitous Console.Write/WriteLine calls:
Uri url = new Uri(val);
bool result = true;
System.Net.HttpWebRequest req = (System.Net.HttpWebRequest)System.Net.HttpWebRequest.Create(url);
string source = String.Empty;
Uri responseURI;
try
{
using (System.Net.WebResponse webResponse = req.GetResponse())
{
using (HttpWebResponse httpWebResponse = webResponse as HttpWebResponse)
{
responseURI = httpWebResponse.ResponseUri;
StreamReader reader;
if (httpWebResponse.ContentEncoding.ToLower().Contains("gzip"))
{
reader = new StreamReader(new GZipStream(httpWebResponse.GetResponseStream(), CompressionMode.Decompress));
}
else if (httpWebResponse.ContentEncoding.ToLower().Contains("deflate"))
{
reader = new StreamReader(new DeflateStream(httpWebResponse.GetResponseStream(), CompressionMode.Decompress));
}
else
{
reader = new StreamReader(httpWebResponse.GetResponseStream());
}
source = reader.ReadToEnd();
reader.Close();
}
}
req.Abort();
HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(source);
result = true;
}
catch (ArgumentException ae)
{
Console.WriteLine(url + "\n--\n" + ae.Message);
result = false;
}
catch (WebException we)
{
Console.WriteLine(url + "\n--\n" + we.Message);
result = false;
string urlValue = url.ToString();
try
{
bool cont = true;
int count = 0;
do
{
req = (System.Net.HttpWebRequest)System.Net.HttpWebRequest.Create(urlValue);
req.Headers.Add("Accept-Language", "en-us,en;q=0.5");
req.AllowAutoRedirect = false;
using (System.Net.WebResponse webResponse = req.GetResponse())
{
using (HttpWebResponse httpWebResponse = webResponse as HttpWebResponse)
{
responseURI = httpWebResponse.ResponseUri;
StreamReader reader;
if (httpWebResponse.ContentEncoding.ToLower().Contains("gzip"))
{
reader = new StreamReader(new GZipStream(httpWebResponse.GetResponseStream(), CompressionMode.Decompress));
}
else if (httpWebResponse.ContentEncoding.ToLower().Contains("deflate"))
{
reader = new StreamReader(new DeflateStream(httpWebResponse.GetResponseStream(), CompressionMode.Decompress));
}
else
{
reader = new StreamReader(httpWebResponse.GetResponseStream());
}
source = reader.ReadToEnd();
if (string.IsNullOrEmpty(source))
{
urlValue = httpWebResponse.Headers["Location"].ToString();
count++;
reader.Close();
}
else
{
cont = false;
}
}
}
} while (cont);
}
catch (UriFormatException uriEx)
{
Console.WriteLine(urlValue + "\n--\n" + uriEx.Message + "\r\n");
result = false;
}
catch (WebException innerWE)
{
Console.WriteLine(urlValue + "\n--\n" + innerWE.Message+"\r\n");
result = false;
}
}
if (result)
Console.WriteLine("testing successful");
else
Console.WriteLine("testing unsuccessful");
Since this is currently just test code I hardcode val as http://rss.nytimes.com/c/34625/f/642557/s/3d072012/sc/38/l/0Lartsbeat0Bblogs0Bnytimes0N0C20A140C0A70C30A0Csarah0Ekane0Eplay0Eamong0Eofferings0Eat0Est0Eanns0Ewarehouse0C0Dpartner0Frss0Gemc0Frss/story01.htm
the ending url that gives the UriFormatException is: http:////www-nc.nytimes.com/2014/07/30/sarah-kane-play-among-offerings-at-st-anns-warehouse/?=_php=true&_type=blogs&_php=true&_type=blogs&_php=true&_type=blogs&_php=true&_type=blogs&_php=true&_type=blogs&_php=true&_type=blogs&_php=true&_type=blogs&partner=rss&emc=rss&_r=6&
Now I'm not sure if I'm missing something or if I'm doing the looping wrong, but if I take val and just put it into a browser the page loads fine, and if I take the url that causes the exception and put it in a browser I get taken to an account login for nytimes.
I have a number of these rss feed urls that are resulting in this problem. I also have a large number of these rss feed urls that have no problem loading at all. Let me know if there is any more information needed to help resolve this. Any help with this would be greatly appreciated.
Could it be that I need to have some sort of cookie capability enabled?
You need to keep track of the cookies while doing all your requests. You can use an instance of the CookieContainer class to achieve that.
At the top of your method I made the following changes:
Uri url = new Uri(val);
bool result = true;
// keep all our cookies for the duration of our calls
var cookies = new CookieContainer();
System.Net.HttpWebRequest req = (System.Net.HttpWebRequest)System.Net.HttpWebRequest.Create(url);
// assign our CookieContainer to the new request
req.CookieContainer = cookies;
string source = String.Empty;
Uri responseURI;
try
{
And in the exception handler where you create a new HttpWebRequest, you do the assignment from our CookieContainer again:
do
{
req = (System.Net.HttpWebRequest)System.Net.HttpWebRequest.Create(urlValue);
// reuse our cookies!
req.CookieContainer = cookies;
req.Headers.Add("Accept-Language", "en-us,en;q=0.5");
req.AllowAutoRedirect = false;
using (System.Net.WebResponse webResponse = req.GetResponse())
{
This makes sure that on each successive call the cookies already present are re-sent with the next request. If you leave this out, no cookies are sent, so the site you are trying to visit assumes you are a fresh/new/unseen user and routes you down a kind of authentication path.
If you want to store/keep cookies beyond this method, you could move the cookie instance variable to a public static property so you can use all those cookies program-wide, like so:
public static class Cookies
{
static readonly CookieContainer _cookies = new CookieContainer();
public static CookieContainer All
{
get
{
return _cookies;
}
}
}
And to use it in a WebRequest:
var req = (System.Net.HttpWebRequest) WebRequest.Create(url);
req.CookieContainer = Cookies.All;

Get cookies from httpwebrequest

I'm trying to get all cookies from a website using this code
CookieContainer cookieJar = new CookieContainer();
var request = (HttpWebRequest)HttpWebRequest.Create("http://www.foetex.dk/ugenstilbud/Pages/Zmags.aspx");
request.CookieContainer = cookieJar;
var response = request.GetResponse();
foreach (Cookie c in cookieJar.GetCookies(request.RequestUri))
{
Console.WriteLine("Cookie['" + c.Name + "']: " + c.Value);
}
Console.ReadLine();
The only thing I want is to display them with Console.WriteLine, but I'm not getting a single one of them.
Please use this:
HttpWebRequest request = null;
request = HttpWebRequest.Create(StringURL) as HttpWebRequest;
HttpWebResponse TheResponse = (HttpWebResponse)request.GetResponse();
String setCookieHeader = TheResponse.Headers[HttpResponseHeader.SetCookie];
The following example uses the ASP.NET HttpCookie class and its properties to read a cookie with a specific name (note this is the server-side API, not HttpWebRequest):
HttpCookie myCookie = new HttpCookie("MyTestCookie");
myCookie = Request.Cookies["MyTestCookie"];
// Read the cookie information and display it.
if (myCookie != null)
Response.Write("<p>"+ myCookie.Name + "<p>"+ myCookie.Value);
else
Response.Write("not found");
Retrieve the values in the HttpWebResponse.Cookies property of HttpWebResponse. In this example, the cookies are retrieved and saved to isolated storage:
private void ReadCallback(IAsyncResult asynchronousResult)
{
HttpWebRequest request = (HttpWebRequest)asynchronousResult.AsyncState;
HttpWebResponse response = (HttpWebResponse)
request.EndGetResponse(asynchronousResult);
using (IsolatedStorageFile isf =
IsolatedStorageFile.GetUserStoreForSite())
{
using (IsolatedStorageFileStream isfs = isf.OpenFile("CookieExCookies",
FileMode.OpenOrCreate, FileAccess.Write))
{
using (StreamWriter sw = new StreamWriter(isfs))
{
foreach (Cookie cookieValue in response.Cookies)
{
sw.WriteLine("Cookie: " + cookieValue.ToString());
}
sw.Close();
}
}
}
}
To get the list of cookies, you can use the method below:
private async Task<List<Cookie>> GetCookies(string url)
{
var cookieContainer = new CookieContainer();
var uri = new Uri(url);
using (var httpClientHandler = new HttpClientHandler
{
CookieContainer = cookieContainer
})
{
using (var httpClient = new HttpClient(httpClientHandler))
{
await httpClient.GetAsync(uri);
return cookieContainer.GetCookies(uri).Cast<Cookie>().ToList();
}
}
}

Categories

Resources