I made a program in VB.NET a few years back which counted the headers from web requests; if the count matched 10 it would do something, otherwise it would continue checking.
I'm now trying to port the program to C# and extend it. Does C# have a similar method? I found this but I am struggling to implement it.
For reference, this is my Visual Basic code:
Sub CheckLink(ByVal link As String)
    Try
        If link.Length >= 3 Then
            Dim web As New WebClient
            web.Headers.Add("user-agent", "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0")
            Dim t As Byte() = web.DownloadData("https://www.google.com")
            Dim headers As WebHeaderCollection = web.ResponseHeaders
            web.Dispose()
            If headers.Count = 10 Then
                MsgBox("Correct")
            Else
                ' Do nothing
            End If
        End If
    Catch ex As Exception
        ' Ignore failed requests
    End Try
End Sub
Is there an equivalent method in C#? Thank you!
EDIT:
I've tried implementing it in C#, but I don't think the user agent is correct, as nothing is output. I tested the header count on a basic plain-text website without the user agent and it printed the correct count, but I think websites that use JavaScript need the user agent provided, otherwise the HttpWebResponse is not returned. Where have I gone wrong?
using System;
using System.Net;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Header count: ");
        HttpWebRequest myHttpWebRequest = (HttpWebRequest)WebRequest.Create("http://passport.twitch.tv/usernames/sdjkf3jk");
        // Leading space removed from the User-Agent value.
        myHttpWebRequest.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0";
        using (HttpWebResponse myHttpWebResponse = (HttpWebResponse)myHttpWebRequest.GetResponse())
        {
            WebHeaderCollection myWebHeaderCollection = myHttpWebResponse.Headers;
            Console.WriteLine(myWebHeaderCollection.Count); // Test write to see if outputting
        }
    }
}
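For reference, a direct C# port of the VB.NET CheckLink routine might look like the sketch below; the 10-header threshold and the Google URL come from the VB code, and the WebException catch is added because GetResponse throws on non-success status codes:
using System;
using System.Net;

class Program
{
    static void CheckLink(string link)
    {
        try
        {
            if (link.Length >= 3)
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(link);
                request.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0";
                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    if (response.Headers.Count == 10)
                    {
                        Console.WriteLine("Correct");
                    }
                    // else: do nothing, keep checking
                }
            }
        }
        catch (WebException ex)
        {
            Console.WriteLine("Request failed: " + ex.Message);
        }
    }

    static void Main(string[] args)
    {
        CheckLink("https://www.google.com");
    }
}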
Related
So I wrote a program to send a POST request in Python, and I've been having a lot of trouble trying to do the same thing in C#. I've been searching for about two hours now and am coming up empty-handed. If anyone can help, it would be greatly appreciated.
url = "https://www.example.com"
headers = {"User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64;
rv:63.0) Gecko/20100101 Firefox/63.0"}
data = {"example":"test", "example2":"test2", "example3":"example3"}
r = requests.post(url, json=data, headers=headers)
This is quite different from the answers given to the other questions on Stack Overflow.
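A minimal C# sketch of the same POST using HttpClient, assuming the same placeholder URL and a hand-built JSON body (swap in a JSON serializer for real payloads):
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Same User-Agent as the Python headers dict.
            client.DefaultRequestHeaders.TryAddWithoutValidation(
                "User-Agent",
                "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:63.0) Gecko/20100101 Firefox/63.0");

            // Hand-built JSON mirroring the Python dict; a serializer is safer for real data.
            var json = "{\"example\":\"test\",\"example2\":\"test2\",\"example3\":\"example3\"}";
            var content = new StringContent(json, Encoding.UTF8, "application/json");

            HttpResponseMessage response = await client.PostAsync("https://www.example.com", content);
            Console.WriteLine(response.StatusCode);
        }
    }
}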
I'm confused about why I'm getting a "Too many requests" exception when sending a GET request to the server.
The request works fine in Burp Suite/Postman, even without headers. I also set up 10 consecutive requests in Postman with an interval of 700 ms and received an OK status code on all of them, yet the C# code still throws this exception.
Any help is really appreciated.
EDIT:
// "example.com" alone is not a valid URI; a scheme is required.
var req = WebRequest.Create("https://example.com") as HttpWebRequest;
req.Method = "GET";
req.UserAgent = "Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:56.0) Gecko/20100101 Firefox/56.0";
req.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
using (var resp = req.GetResponse()) { /* read the response here */ }
Add this to your request header:
Retry-After: 120
Like this in code:
request.Headers.Add("Retry-After", "120");
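Worth knowing: Retry-After is typically a header the server sends back with a 429 response. A minimal sketch (hypothetical URL) that catches the WebException, reads the server's Retry-After value, and waits before retrying:
using System;
using System.IO;
using System.Net;
using System.Threading;

class RateLimitedClient
{
    static string GetWithRetry(string url)
    {
        for (int attempt = 0; attempt < 3; attempt++)
        {
            try
            {
                var req = (HttpWebRequest)WebRequest.Create(url);
                req.UserAgent = "Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:56.0) Gecko/20100101 Firefox/56.0";
                using (var resp = (HttpWebResponse)req.GetResponse())
                using (var reader = new StreamReader(resp.GetResponseStream()))
                {
                    return reader.ReadToEnd();
                }
            }
            catch (WebException ex) when ((ex.Response as HttpWebResponse)?.StatusCode == (HttpStatusCode)429)
            {
                // Honor the server's Retry-After (seconds) if present; otherwise back off 5 s.
                int seconds;
                if (!int.TryParse(ex.Response.Headers["Retry-After"], out seconds))
                    seconds = 5;
                Thread.Sleep(TimeSpan.FromSeconds(seconds));
            }
        }
        throw new WebException("Still rate-limited after retries.");
    }

    static void Main()
    {
        Console.WriteLine(GetWithRetry("https://example.com"));
    }
}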
I have a website on a local network that I am trying to write a little client for.
I am trying to use WebClient for this purpose; however, the website somehow seems to detect it and cuts the connection, which results in a WebException.
To counter this, I have tried adding headers like:
WebClient wc = new WebClient();
wc.Headers["User-Agent"] = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.121 Safari/535.2";
wc.Headers.Add("Accept", "text/html,application/xhtml+xml,application/xml");
wc.Headers.Add("Accept-Encoding", "deflate, sdch, br");
wc.Headers.Add("Accept-Charset", "ISO-8859-1");
wc.Headers.Add("Accept-Language", "en-us;q=0.7,en;q=0.3");
However, the website still cut off the connection, and I noticed that not all of the headers were actually sent, so I then tried overriding WebClient's GetWebRequest:
public class MyWebClient : WebClient
{
    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest request = base.GetWebRequest(address);
        var castRequest = request as HttpWebRequest;
        if (castRequest != null)
        {
            // Restricted headers such as Accept and User-Agent must be set
            // via properties on HttpWebRequest rather than Headers.Add.
            castRequest.KeepAlive = true;
            castRequest.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8";
            castRequest.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.121 Safari/535.2";
            castRequest.Headers.Add("Accept-Encoding", "deflate, sdch, br");
            castRequest.Headers.Add("Accept-Charset", "ISO-8859-1");
            castRequest.Headers.Add("Accept-Language", "en-US,en;q=0.8");
        }
        return request;
    }
}
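The subclass is then used like a plain WebClient, for example (URL hypothetical):
using (var client = new MyWebClient())
{
    string html = client.DownloadString("http://intranet.local/page");
}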
This managed to send all the headers; however, I still could not access the website.
I can access the website just fine from any browser, such as Firefox or Chrome (from which I copied the headers), or even from a WebBrowser control, and I can access other websites using WebClient without any issue.
Is there anything specific that would prevent WebClient from accessing such a website?
Is there anything else I can do to make a WebClient request look more like a browser request?
I figured out that I was looking in the wrong direction.
It seems the website in question does not support the default SecurityProtocol, so I had to enable TLS 1.2:
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
For some reason this caused the same issue with another local website I was parsing, which I solved by enabling all TLS versions:
ServicePointManager.SecurityProtocol =
SecurityProtocolType.Tls
| SecurityProtocolType.Tls11
| SecurityProtocolType.Tls12;
My code is:
string result = new WebClient().DownloadString("https://www.facebook.com/profile.php?id=123456789");
The result gives me Facebook's unsupported-browser page:
body class=\"unsupportedBrowser
What I intend to do:
Download the source code of a particular Facebook page.
Problem encountered:
I get the stream from Facebook, but Facebook blocks me because the request comes from my app, which it does not treat as a valid browser.
Expectation: How can I present a browser type such as Chrome so that Facebook accepts the request as coming from a valid browser?
You can add headers to the WebClient object. However, I prefer the HttpWebRequest/HttpWebResponse approach to scraping, since I believe it gives more options.
Headers.Add("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
Headers.Add("Accept-Encoding", "gzip, deflate");
Headers.Add("Accept-Language", "en-US,en;q=0.5");
Headers.Add("Cookie", "has_js=1");
Headers.Add("DNT", "1");
Headers.Add("Host", host);
Headers.Add("Referer", url);
Headers.Add("User-Agent", "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:32.0) Gecko/20100101 Firefox/32.0");
I am trying to download this image using C#
http://www.pinkice.com/data/product_image/1/13954Untitled-1.jpg
When I try to download it using a WebClient, I get an exception saying the underlying connection was closed unexpectedly.
I've tried modifying the headers to simulate Chrome:
Headers[HttpRequestHeader.Accept] = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
Headers[HttpRequestHeader.AcceptLanguage] = "en-US,en;q=0.8";
Headers[HttpRequestHeader.CacheControl] = "max-age=0";
Headers[HttpRequestHeader.UserAgent] = "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.6 (KHTML, like Gecko) Chrome/23.0.1243.2 Safari/537.6";
This did not work, so I then tried to see if it even worked with wget:
wget "http://www.pinkice.com/data/product_image/1/14231Untitled-2.jpg"
Which resulted in
HTTP request sent, awaiting response... No data received. Retrying.
Can anyone figure this out?
The code below works:
using (WebClient wc = new WebClient())
{
    wc.Headers["User-Agent"] = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1";
    byte[] buf = wc.DownloadData("http://www.pinkice.com/data/product_image/1/13954Untitled-1.jpg");
    // Image and MemoryStream come from System.Drawing and System.IO.
    Image bmp = Image.FromStream(new MemoryStream(buf));
}
The problem was that I was reusing the WebClient object. I think it caches something oddly when a 304 HTTP status code comes back in response to the If-Modified-Since header. Moral of the story: do not reuse the WebClient object.