Send CONNECT with User-Agent in C#

I'm trying to automate a WebSocket service that denies the connection unless you send a User-Agent with the CONNECT request.
I tried sending the upgrade request with HttpWebRequest and setting the User-Agent via the property.
Debugging the request with Fiddler, this is what was sent out:
CONNECT *.*.com:443 HTTP/1.1
Host: *.*.com:443
Connection: keep-alive
How do I add the User-Agent string to the CONNECT request and then upgrade to the WebSocket protocol?
My code so far:
public void Login(Action onEnd = null)
{
    var req = CreateUpgradeRequest();
    var res = GetResponse(req);
}

private HttpWebRequest CreateUpgradeRequest()
{
    HttpWebRequest request = WebRequest.Create("https://lobby35.runescape.com/") as HttpWebRequest;
    request.UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36";
    request.Connection = "Upgrade";
    // SetWebSocketHeader (helper not shown) presumably adds the corresponding Sec-WebSocket-* header.
    SetWebSocketHeader(request, "Key", "5LENZfSifyj/Rw1ghTvpgw==");
    SetWebSocketHeader(request, "Version", "13");
    SetWebSocketHeader(request, "Extensions", "permessage-deflate; client_max_window_bits");
    SetWebSocketHeader(request, "Protocol", "jagex");
    return request;
}

You cannot use WebRequest to create a WebSocket connection. You will need ClientWebSocket and set the header with ClientWebSocket.Options.SetRequestHeader.
Note, you may have issues adding that header: see "Setting "User-Agent" HTTP header in ClientWebSocket".
Update: since you cannot add that header with ClientWebSocket, try WebSocket4Net instead.
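For illustration, a minimal sketch of the ClientWebSocket route, assuming the endpoint is reachable at wss://lobby35.runescape.com/ (inferred from the question's URL) and that the runtime accepts the User-Agent header, which older .NET Framework versions do not:
using System;
using System.Net.WebSockets;
using System.Threading;
using System.Threading.Tasks;

public static class WebSocketLogin
{
    public static async Task ConnectAsync()
    {
        using var socket = new ClientWebSocket();
        // Handshake headers are configured on Options before ConnectAsync.
        // On .NET Framework this call can throw for restricted headers such as
        // User-Agent; on .NET Core / .NET 5+ the header is accepted.
        socket.Options.SetRequestHeader("User-Agent",
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36");
        socket.Options.AddSubProtocol("jagex");

        // Sec-WebSocket-Key, -Version and -Extensions are generated by ClientWebSocket itself.
        await socket.ConnectAsync(new Uri("wss://lobby35.runescape.com/"), CancellationToken.None);
        Console.WriteLine(socket.State); // Open if the upgrade succeeded
    }
}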

Related

Content-Type returned by HEAD request sent using C# HttpClient is incorrect; however, Postman returns the correct Content-Type

I need to get the Content-Type of a URL using .NET.
However, it returns text/html for this URL, while Postman returns the correct Content-Type, video/mp4. For several other file URLs .NET returns the correct Content-Type, but not for this one.
My code:
string uriString = "INSERT_URL_HERE";
HttpClient httpClient = new();
try
{
    HttpRequestMessage request = new(HttpMethod.Head, uriString);
    HttpResponseMessage response = await httpClient.SendAsync(request);
    response.EnsureSuccessStatusCode();
    string contentType = response.Content.Headers.ContentType.ToString();
    Console.WriteLine("\nThe Content-Type of the resource is: {0}\n", contentType);
}
catch (Exception exception)
{
    Console.WriteLine("\nException caught!");
    Console.WriteLine("Message: {0}\n", exception.Message);
}
Postman response: (screenshot omitted; Postman shows Content-Type: video/mp4)
The problem was that the User-Agent header was not being sent, as @DiplomacyNotWar stated. After setting the User-Agent header value it worked!
PS: I used the same User-Agent as Google Chrome.
string browserUserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36";
request.Headers.Add("User-Agent", browserUserAgent);
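Putting the fix into the original snippet, roughly (the URL is still the placeholder from the question):
string uriString = "INSERT_URL_HERE";
HttpClient httpClient = new();
HttpRequestMessage request = new(HttpMethod.Head, uriString);
// Without a User-Agent some servers fall back to a generic text/html answer,
// so send a browser-like value before dispatching the HEAD request.
string browserUserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36";
request.Headers.Add("User-Agent", browserUserAgent);
HttpResponseMessage response = await httpClient.SendAsync(request);
response.EnsureSuccessStatusCode();
Console.WriteLine(response.Content.Headers.ContentType);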

C# WebClient receives 403 when getting html from a site

I am trying to download the HTML from a site and parse it. I am actually only interested in the OpenGraph data in the head section. For most sites WebClient, HttpClient or HtmlAgilityPack works, but for some domains I get a 403, for example: westelm.com
I have tried setting the headers to be exactly the same as they are when I use the browser, but I still get a 403. Here is some code:
string url = "https://www.westelm.com/m/products/brushed-herringbone-throw-t5792/?";
var doc = new HtmlDocument();
using (WebClient client = new WebClient())
{
    client.Headers["User-Agent"] = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.102 Safari/537.36";
    client.Headers["Accept"] = "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9";
    client.Headers["Accept-Encoding"] = "gzip, deflate, br";
    client.Headers["Accept-Language"] = "en-US,en;q=0.9";
    doc.Load(client.OpenRead(url));
}
At this point, I am getting a 403.
Am I missing something, or is the site administrator protecting the site from API requests?
How can I make this work? Is there a better way to get OpenGraph data from a site?
Thanks.
I used your question to resolve the same problem. I don't know if you have already fixed this, but I'll tell you how it worked for me.
A page was giving me a 403 for the same reasons. The thing is: you need to emulate a "web browser" from the code, sending a lot of headers.
I used one of your headers that I wasn't using (Accept-Language).
I didn't use WebClient though; I used HttpClient to fetch the page.
private static async Task<string> GetHtmlResponseAsync(HttpClient httpClient, string url)
{
    using var request = new HttpRequestMessage(HttpMethod.Get, new Uri(url));
    request.Headers.TryAddWithoutValidation("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9");
    request.Headers.TryAddWithoutValidation("Accept-Encoding", "gzip, deflate, br");
    request.Headers.TryAddWithoutValidation("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.102 Safari/537.36");
    request.Headers.TryAddWithoutValidation("Accept-Charset", "UTF-8");
    request.Headers.TryAddWithoutValidation("Accept-Language", "en-US,en;q=0.9");

    using var response = await httpClient.SendAsync(request).ConfigureAwait(false);
    if (response == null)
        return string.Empty;

    // The body is decompressed manually here, which assumes the server actually
    // answered with gzip (the Accept-Encoding above also offers deflate and br,
    // so this would fail for those encodings).
    using var responseStream = await response.Content.ReadAsStreamAsync().ConfigureAwait(false);
    using var decompressedStream = new GZipStream(responseStream, CompressionMode.Decompress);
    using var streamReader = new StreamReader(decompressedStream);
    return await streamReader.ReadToEndAsync().ConfigureAwait(false);
}
If it helps you, I'm glad. If not, I will leave this answer here to help someone else in the future!
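As a variation (my own untested sketch, not part of the answer above): you can let HttpClientHandler do the decompression instead of wiring up GZipStream by hand, which also covers servers that pick deflate or brotli. Here url is the same parameter as in the method above:
// DecompressionMethods.All needs .NET Core 3.0 or later; on .NET Framework
// use DecompressionMethods.GZip | DecompressionMethods.Deflate instead.
var handler = new HttpClientHandler
{
    AutomaticDecompression = DecompressionMethods.All
};
using var httpClient = new HttpClient(handler);

using var request = new HttpRequestMessage(HttpMethod.Get, new Uri(url));
request.Headers.TryAddWithoutValidation("User-Agent",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.102 Safari/537.36");
request.Headers.TryAddWithoutValidation("Accept",
    "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
request.Headers.TryAddWithoutValidation("Accept-Language", "en-US,en;q=0.9");
// No explicit Accept-Encoding: the handler adds one for the enabled methods
// and transparently decompresses the response body.

using var response = await httpClient.SendAsync(request).ConfigureAwait(false);
string html = await response.Content.ReadAsStringAsync().ConfigureAwait(false);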

C# HttpClient 504 Gateway Timeout when not using Fiddler proxy

I have this simple code to instantiate an HttpClient object and send a few web requests, but I am running into a few problems that I will explain shortly:
var client = WebHelper.CreateGzipHttpClient(new WebProxy("127.0.0.1", 8888));
client.DefaultRequestHeaders.Add("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36");
client.DefaultRequestHeaders.Add("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3");
client.DefaultRequestHeaders.Add("Accept-Language", "en-US,en;q=0.9");
client.DefaultRequestHeaders.Add("Accept-Encoding", "gzip, deflate, br");
client.DefaultRequestHeaders.Add("Sec-Fetch-Mode", "navigate");
client.DefaultRequestHeaders.Add("Sec-Fetch-Site", "none");
client.DefaultRequestHeaders.Add("Sec-Fetch-User", "?1");
client.DefaultRequestHeaders.Add("Upgrade-Insecure-Requests", "1");
await client.GetAsync("https://www.example.com");
await client.GetAsync("https://www.bestbuy.com");
await client.GetAsync("https://www.costco.com");
If I remove the request to example.com, the subsequent requests fail (504 Gateway Timeout on bestbuy.com). That doesn't make any sense to me, so I was wondering if someone on SO could enlighten me as to why that is.
Furthermore, if I remove the WebProxy from the HttpClient, only the request to example.com succeeds, and the other two fail.
What is going on and how can I fix it?
public static HttpClient CreateGzipHttpClient(WebProxy proxy = null)
{
    HttpClientHandler handler = new HttpClientHandler()
    {
        AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate,
        Proxy = proxy
    };
    return new HttpClient(handler);
}
Fixed by removing the Fiddler-related SSL certificates within Internet Explorer's Internet Options. They weren't removed even after uninstalling Fiddler.

How can I ignore SSL checks while using WebClient in C#?

I am trying to get the contents of this URL as a string.
https://noembed.com/embed?url=https://www.youtube.com/watch?v=1FLhOGOg2Qg
This is the code I am using:
var html_content = "";
using (var client = new WebClient())
{
    client.Headers.Add("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.1 Safari/537.36");
    html_content += client.DownloadString("https://noembed.com/embed?url=https://www.youtube.com/watch?v=1FLhOGOg2Qg");
}
Console.WriteLine(html_content);
Console.ReadLine();
And this is the error I get:
System.Net.WebException was unhandled
  HResult=-2146233079
  Message=The request was aborted: Could not create SSL/TLS secure channel.
  Source=System
I am using this in a WPF application and I am OK with ignoring SSL here. I have already tried other answers for ignoring SSL, but none worked. It works with other URLs, e.g. https://www.youtube.com/watch?v=1FLhOGOg2Qg, but not with the noembed.com URL.
Add ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
This worked for me:
var html_content = "";
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
using (var client = new WebClient())
{
    client.Headers.Add("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.1 Safari/537.36");
    html_content += client.DownloadString("https://noembed.com/embed?url=https://www.youtube.com/watch?v=1FLhOGOg2Qg");
}
Console.WriteLine(html_content);
Console.ReadLine();
Output I got:
{"author_url":"https://www.youtube.com/user/nogoodflix","url":"https://www.youtube.com/watch?v=1FLhOGOg2Qg","provider_url":"https://www.youtube.com/","title":"ONE FOR THE MONEY Trailer 2011 Official [HD] Katherine Heigl","author_name":"Streaming Clips","type":"video","height":270,"thumbnail_height":360,"thumbnail_width":480,"provider_name":"YouTube","html":"\nhttps://www.youtube.com/embed/1FLhOGOg2Qg?feature=oembed\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\">\n","thumbnail_url":"https://i.ytimg.com/vi/1FLhOGOg2Qg/hqdefault.jpg","version":"1.0","width":480}

Fake HTTP POST request - ViewState

I am trying to fake a POST request to a site, programmed in C#.
I used Wireshark to sniff the communication between my computer and the server.
I noticed that the client sends view state data (encoded in Base64), and I would like to know how to fake it in my request.
My POST code:
public static void sendPostRequest(string responseUri, CookieCollection responseCookies)
{
    HttpWebRequest mPostRequest =
        (HttpWebRequest)WebRequest.Create("http://tickets.cinema-city.co.il/webtixsnetglilot/SelectSeatPage2.aspx?dtticks=" + responseUri + "&hideBackButton=1");
    mPostRequest.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.95 Safari/537.36";
    mPostRequest.KeepAlive = false;
    mPostRequest.Method = "POST";
    mPostRequest.ContentType = "application/x-www-form-urlencoded";

    CookieContainer mCookies = new CookieContainer();
    foreach (Cookie cookie in responseCookies)
    {
        mCookies.Add(cookie);
    }
    mPostRequest.CookieContainer = mCookies;

    HttpWebResponse myHttpWebResponse2 = (HttpWebResponse)mPostRequest.GetResponse();
}
If you can "fake" signed/encrypted data you don't really need to deal with fake posts - just steal all SSL traffic :).
View state comes in original response for the page encrypted - so you simply need to parse original response (use Html Agility Pack) and send that view state back in post request.
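For example, a rough sketch of that idea with Html Agility Pack; pageUrl is a placeholder for the page you originally GET, and only the standard ASP.NET hidden fields are shown:
// GET the page first and keep the cookies for the follow-up POST.
var cookies = new CookieContainer();
var getRequest = (HttpWebRequest)WebRequest.Create(pageUrl);
getRequest.CookieContainer = cookies;
string html;
using (var response = (HttpWebResponse)getRequest.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    html = reader.ReadToEnd();
}

// Parse the ASP.NET hidden fields out of the original response.
var doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(html);
string viewState = doc.DocumentNode
    .SelectSingleNode("//input[@id='__VIEWSTATE']")
    .GetAttributeValue("value", "");
string eventValidation = doc.DocumentNode
    .SelectSingleNode("//input[@id='__EVENTVALIDATION']")
    ?.GetAttributeValue("value", "") ?? "";

// Echo them back, URL-encoded, in the POST body together with your own form fields.
string body = "__VIEWSTATE=" + Uri.EscapeDataString(viewState) +
              "&__EVENTVALIDATION=" + Uri.EscapeDataString(eventValidation);

var postRequest = (HttpWebRequest)WebRequest.Create(pageUrl);
postRequest.Method = "POST";
postRequest.ContentType = "application/x-www-form-urlencoded";
postRequest.CookieContainer = cookies;
using (var writer = new StreamWriter(postRequest.GetRequestStream()))
{
    writer.Write(body);
}
using (var postResponse = (HttpWebResponse)postRequest.GetResponse())
{
    // Read postResponse here if you need the result of the POST.
}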
