GitHub last commit? - C#

So I've been looking around trying to find out how I can go to a specified GitHub page, get the last commit value, and bind it to a value in my application. Nothing seems to make sense, there are few if any good examples to base anything on, and no one seems willing to share their knowledge on this topic.
I'm trying to get only the last commit value from a GitHub page and use it as a value in my application. Can someone give me an example of how to do this? I am using C# with a WPF project.

If you want to clone the repository locally and inspect it, you could use GitSharp or libgit2sharp (see the sketch at the end of this answer). If that is not an option for you, then you can use the GitHub API. The URL you are after is:
https://api.github.com/repos/<repo_path>/commits
e.g. https://api.github.com/repos/NancyFx/Nancy/commits
// Requires Json.NET: using System.Net.Http; using Newtonsoft.Json.Linq;
using (var client = new HttpClient())
{
    // GitHub's API rejects requests that have no User-Agent header.
    client.DefaultRequestHeaders.Add("User-Agent",
        "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)");
    // .Result blocks the calling thread; prefer await in production code.
    using (var response = client.GetAsync("https://api.github.com/repos/NancyFx/Nancy/commits").Result)
    {
        var json = response.Content.ReadAsStringAsync().Result;
        // The endpoint returns a JSON array with the newest commit first.
        dynamic commits = JArray.Parse(json);
        string lastCommitMessage = commits[0].commit.message;
    }
}
As mentioned in the comments, the second option couples your implementation to GitHub, so be sure your app won't need to work with other Git hosts in the future if you choose it.
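If you go the clone route instead, a minimal sketch using libgit2sharp might look like this (the local clone path is a placeholder):
// Requires the LibGit2Sharp NuGet package: using LibGit2Sharp;
// Clone the repository to a placeholder path and read the tip commit.
string repoPath = Repository.Clone(
    "https://github.com/NancyFx/Nancy.git", @"C:\temp\nancy");
using (var repo = new Repository(repoPath))
{
    // Head.Tip is the most recent commit on the checked-out branch.
    Commit lastCommit = repo.Head.Tip;
    string message = lastCommit.MessageShort; // first line of the commit message
    string sha = lastCommit.Sha;
}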

Related

System.Security.Claims.ClaimsPrincipal.Current (and HttpContext.Current.User) has different claims with the exact same request depending on caller

This is something I have never seen before. When executing an identical request against ASP.NET Web API 2 from two different applications, I get different responses. I narrowed it down to ClaimsPrincipal.Current being the culprit.
var principal = ClaimsPrincipal.Current;
if (principal == null) return false;
if (!principal.Identity.IsAuthenticated) return false;
According to MSDN, ClaimsPrincipal.Current just returns Thread.CurrentPrincipal by default, but I still do not understand how this can happen.
https://learn.microsoft.com/en-us/aspnet/core/migration/claimsprincipal-current?view=aspnetcore-2.1
I then tried to use System.Web.HttpContext.Current.User instead but this gives the same result.
How can two identical HTTP requests from the same machine generate different responses in this case? The requests can be sent over and over again with the same result. I can even stop the application and IIS Express, start them again, and the result is the same. What is happening here? It must be some sort of session stored on the server, but I don't get why the values differ for identical requests. There is nothing misspelled in the request itself: I can copy the request generated by Postman and it works in Burp, but if it is sent via Postman it fails. I don't think it is Postman-specific either. I used the Copy as PowerShell command in Chrome Developer Tools -> Network tab on a working request and got the same result with Invoke-WebRequest.
IIS uses Anonymous Authentication. The application uses IAppBuilder with app.UseCookieAuthentication, AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie, and a custom CookieName.
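In other words, the OWIN startup looks roughly like the following sketch ("XXX" stands in for the custom cookie name, matching the captures below):
// A rough sketch of the described setup.
// using Microsoft.AspNet.Identity; using Microsoft.Owin.Security.Cookies; using Owin;
public void Configuration(IAppBuilder app)
{
    app.UseCookieAuthentication(new CookieAuthenticationOptions
    {
        AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
        CookieName = "XXX" // stand-in for the real custom cookie name
    });
}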
With Burp, the request gives principal.Identity.IsAuthenticated = true.
With Postman, the exact same request gives principal.Identity.IsAuthenticated = false. The Postman GUI wraps the line at a "-", but the copied value works in Burp, so nothing is wrong there.
Update:
Postman cookie value in UTF-8:
XXX=_1gQcJZ_zwNNS6f5OO0mD5y4pPHATpzw7uRHQZnZidNfYYec9S3MkR-d9aaxx1AilQSCK_h1-9LVS1uVM_JLJDTty5Nilsx4njjOCsrefgBOvnkt9CIzt_fGu0kzgsi_VbrCSO-txXtLhrOBT61bFskQd0i2yF_xrnqdOoW6yzKmUPrdomxiABMsC-NYw5aSGD9d81ht-oreUGqJKoDQ7EJ0BzUc-Y6BDqrJv5TrIfdgwgOsk2cFN9gfrlN9DQQQpRAAEv5mgiXDmMpUpNvsP-k-CFu69sl1ZlTXOLR5ECSrq7woeIhea6-L9g1mwpslqAV_saLtv0DcbR525gR0tSrpEIuHLwj_TSqTQ1IPHqfcqSP-RzP2jGoz85y6W2glFkfFxAXJBMTjoz4U1fvjURL5qMEuC2IpQZqKGoSbp8xICFA01yY1zzHKxXnKL8MIqDNAe9urQn2W-gmwje9bzFAkft3eYYjctrCrGMRocgQ; __RequestVerificationToken=HOA5v8aiHqUhzZP3fkKMUyi336D7JydqWMSWI-VThQgMrVRZEllKglaGaLOUP0z49ZEuJsrEaYbrLaLCxMgAwxJtfSJhGvsRaB6e3tlMPjc1
BURP cookie value in UTF-8:
XXX=_1gQcJZ_zwNNS6f5OO0mD5y4pPHATpzw7uRHQZnZidNfYYec9S3MkR-d9aaxx1AilQSCK_h1-9LVS1uVM_JLJDTty5Nilsx4njjOCsrefgBOvnkt9CIzt_fGu0kzgsi_VbrCSO-txXtLhrOBT61bFskQd0i2yF_xrnqdOoW6yzKmUPrdomxiABMsC-NYw5aSGD9d81ht-oreUGqJKoDQ7EJ0BzUc-Y6BDqrJv5TrIfdgwgOsk2cFN9gfrlN9DQQQpRAAEv5mgiXDmMpUpNvsP-k-CFu69sl1ZlTXOLR5ECSrq7woeIhea6-L9g1mwpslqAV_saLtv0DcbR525gR0tSrpEIuHLwj_TSqTQ1IPHqfcqSP-RzP2jGoz85y6W2glFkfFxAXJBMTjoz4U1fvjURL5qMEuC2IpQZqKGoSbp8xICFA01yY1zzHKxXnKL8MIqDNAe9urQn2W-gmwje9bzFAkft3eYYjctrCrGMRocgQ; __RequestVerificationToken=HOA5v8aiHqUhzZP3fkKMUyi336D7JydqWMSWI-VThQgMrVRZEllKglaGaLOUP0z49ZEuJsrEaYbrLaLCxMgAwxJtfSJhGvsRaB6e3tlMPjc1
Update 2:
From the Copy as PowerShell command in Chrome Developer Tools -> Network tab:
Invoke-WebRequest -Uri "https://localhost:44349/api/crud/customer" -Headers @{
    "path"="/api/crud/customer";
    "pragma"="no-cache";
    "cookie"="__RequestVerificationToken=3gvrynl8SRhi5CBG-umg5eGii3yUOrHJAQQ7jMXhN_hOk0EGS2XdIDISafhbBZuS3JCCJdP6V60K_crzcQF71aw2totf9CUTPheHBmTNBRM1; io=iki1JghnuzWahlUBAAAJ; XXX=SUdlUpzYNbXJbhPxj4KY6-GC31hHyyPN_IZ88zsXHXIpqzro6t_C5-m8BC_s2xev5SINoI-0316o7ITb6dsRA5b5oYJX2MXIWD2iaMWGADqAZeLDLoeQPHo6B6a8dQ-j2YkI17I4cjQ7SQKBiUCwN3DIZckY8HHnWqF6LGVr79nWG3R1pqI62S3UKgEXOjhFTpEA3fD3clPti4ShG88PWnxa5ypGGDjUolcqjkusylpLAWZ3Jc8K4y-K_WnA-3EX_nNyCHp3Tk8omXHq1LgvQJ3EsqdNvELL2KcwvUCn3ni7ktSt0Vzl6G7vL3AfZhDQb41bn90l4haR9UGvLOqSkZ_cu5IiHzvsFrps6QJ3HJ8d-Dcb4A2soVjnozh7SsZxnz-HppwhV2UaWANvi6MsD4kwvBreJrO9nLMOBRBXhzEInoL0baqkn_nhEtxqAndZHiHcbuoPfz8xGmgV-ilTxZRAnJ8ZAwD3yHREgJsodVg";
    "accept-encoding"="gzip, deflate, br";
    "accept-language"="en-US,en;q=0.9,sv-SE;q=0.8,sv;q=0.7";
    "user-agent"="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36";
    "accept"="*/*";
    "cache-control"="no-cache";
    "authority"="localhost:44349";
    "referer"="https://localhost:44349/";
    "scheme"="https";
    "xsrf-token"="_GHoZagVRoFBIAyoMmT7UEZk44wfKsGlscub-bvoeRMTpysPS_d2uccvyyvPdWDf7srVfmNqM4JN1firyN-Q35UN5DCMew0eq6OV9M_4--i_klYEJcXYSodFi_wAymDVlQCPLroCvDNkwuhdoZvyug2";
    "method"="GET"
}
Huge thanks to @CodeCaster. Sometimes you feel like a n00b all over again. I looked at System.Web.HttpContext.Current.Request.Cookies and indeed it was empty. I hovered over the Cookie header in Postman and then saw the value Restricted Header (use Postman Interceptor). What really got me here was that Invoke-WebRequest in PowerShell produced the same error.
Upgrading to the Postman native app instead of the Chrome app fixed everything.
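For anyone debugging something similar, a throwaway diagnostic action makes it easy to confirm whether the auth cookie actually reaches the server. A sketch (the controller and route names are illustrative):
// Hypothetical Web API 2 endpoint: reports the authentication state and
// which cookies actually arrived with the request.
// using System.Linq; using System.Net.Http; using System.Security.Claims; using System.Web.Http;
public class DiagnosticsController : ApiController
{
    [HttpGet, AllowAnonymous]
    public IHttpActionResult WhoAmI()
    {
        var identity = ClaimsPrincipal.Current?.Identity;
        var cookieNames = Request.Headers.GetCookies()
            .SelectMany(header => header.Cookies)
            .Select(cookie => cookie.Name)
            .ToList();
        return Ok(new
        {
            IsAuthenticated = identity?.IsAuthenticated ?? false,
            AuthenticationType = identity?.AuthenticationType,
            Cookies = cookieNames
        });
    }
}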

C# WPF WebClient.DownloadString() not returning anything

I started off with the simple code below in order to grab the HTML from web pages into a string for later processing. For some sites, like Digikey, it works, but for others, like Mouser, it doesn't.
I have tried putting headers and user agents onto the WebClient, along with converting the URL to a Uri, with no success. Does anybody have any other suggestions of what I could try? Or could anybody try to get the code to work and let me know how it goes?
String url = "http://www.mouser.com/ProductDetail/Vishay-Thin-Film/PCNM2512E1000BST5/?qs=sGAEpiMZZMu61qfTUdNhG6MW4lgzyHBgo9k7HJ54G4u10PG6pMa7%252bA%3d%3d";
WebClient web = new WebClient();
String html = web.DownloadString(url);
MessageBox.Show(html);
EDIT: The URL above should lead to the Mouser product page for the PCNM2512E1000BST5.
EDIT: I tried the following chunk of code with no luck:
String url = "http://www.mouser.com/ProductDetail/Vishay-Thin-Film/PCNM2512E1000BST5/?qs=sGAEpiMZZMu61qfTUdNhG6MW4lgzyHBgo9k7HJ54G4u10PG6pMa7%252bA%3d%3d";
WebClient web = new WebClient();
web.Headers[HttpRequestHeader.UserAgent] = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.121 Safari/535.2";
String html = web.DownloadString(url);
MessageBox.Show(html);
You need to download Fiddler. It's free (it was originally developed at Microsoft) and it lets you record browser sessions. Launch it, open Chrome or whatever your browser is, and go through the steps. Once you're done, stop it and look at every request and response and the raw data sent.
That makes it easy to spot the difference between your code and the browser.
There are also many free tools that will take your request/response data and generate the C# code for you, such as Request To Code. That is not the only one; I'm not at work and can't recall the one I use there, but there are plenty to choose from.
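Once you have diffed the traffic, a browser-like replay might look something like this sketch (the header values are examples from a browser session, not values Mouser is known to require; url is the variable from your snippet):
// A sketch of replaying browser-like headers with HttpWebRequest.
// using System.IO; using System.Net;
var request = (HttpWebRequest)WebRequest.Create(url);
request.UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36";
request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
request.Headers[HttpRequestHeader.AcceptLanguage] = "en-US,en;q=0.9";
request.CookieContainer = new CookieContainer(); // some sites require cookies to survive redirects
using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string html = reader.ReadToEnd();
}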
Hope this helps

HttpWebRequest POST data

Is it possible to make an exactly identical POST with HttpWebRequest in C# as a browser would, without the page being able to detect that it is not actually a browser?
If so, where could I read up more on that?
Download and become familiar with a tool like Fiddler. It allows you to inspect web requests made from applications, like a normal browser, and see exactly what is being sent. You can then emulate the data being sent with a request created in C#, providing values for headers, cookies, etc.
I think this is doable.
Browser detection is based on the User-Agent header in the request, so all you need to do is set that header. With HttpWebRequest you don't set it through the headers collection but rather through the .UserAgent property.
E.g.:
.UserAgent = "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)";
There is quite a lot to user agents. Check this link for the complete list of User-Agents
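As a rough sketch, a form POST that presents itself as a browser might look like this (the endpoint and form fields are placeholders):
// A sketch of a browser-like form POST with HttpWebRequest.
// using System.IO; using System.Net; using System.Text;
var request = (HttpWebRequest)WebRequest.Create("https://example.com/login"); // placeholder URL
request.Method = "POST";
request.ContentType = "application/x-www-form-urlencoded";
request.UserAgent = "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)";
byte[] body = Encoding.UTF8.GetBytes("username=foo&password=bar"); // placeholder form fields
request.ContentLength = body.Length;
using (var stream = request.GetRequestStream())
{
    stream.Write(body, 0, body.Length);
}
using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string html = reader.ReadToEnd();
}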
Useful Links:
How to create a simple proxy in C#?
Is WebRequest The Right C# Tool For Interacting With Websites?
http://codehelp.smartdev.eu/2009/05/08/improve-webclient-by-adding-useragent-and-cookies-to-your-requests/

Download js generated html with C#

There is a reports website whose content I want to parse in C#. I tried downloading the HTML with WebClient, but then I don't get the complete source, since most of it is generated via JS when I visit the website.
I tried using WebBrowser but couldn't get it to work in a console app, even after using Application.Run() and SetApartmentState(ApartmentState.STA).
Is there another way to access this generated html? I also took a look into mshtml but couldn't figure it out.
Thanks
The JavaScript is executed by the browser. If your console app gets the JS, then it is working as expected; what you really need is for your console app to execute the JS code that was downloaded.
You can use a headless browser - XBrowser may serve.
If not, try HtmlUnit, as described in this blog post.
Just a comment here. There shouldn't be any difference between performing an HTTP request with some C# code and the request generated by a browser. If the target web page is getting confused and not generating the correct markup because it can't make heads or tails of the type of browser it thinks it's serving, then maybe all you have to do is set the user agent like so:
((HttpWebRequest)myWebClientRequest).UserAgent = "<a valid user agent>";
For example, my current user agent is:
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0.1) Gecko/20100101 Firefox/9.0.1
Maybe once you do that the page will work correctly. There may be other factors at work here, such as the referrer and so on, but I would try this first and see if it works.
Your best bet is to abandon the console app route and build a Windows Forms application instead. There the WebBrowser control works without any extra effort.
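A minimal sketch of that approach, assuming a Windows Forms project (the URL is a placeholder):
// Read the DOM after the page's scripts have run, using the WinForms WebBrowser.
// using System.Windows.Forms;
var browser = new WebBrowser { ScriptErrorsSuppressed = true };
browser.DocumentCompleted += (sender, e) =>
{
    // DocumentCompleted fires when the page has loaded; scripts that keep
    // rendering afterwards may need an extra delay or polling.
    string renderedHtml = browser.Document.Body.OuterHtml;
};
browser.Navigate("https://example.com/reports"); // placeholder URL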

WebClient forbids opening wikipedia page?

Here's the code I'm trying to run:
var wc = new WebClient();
var stream = wc.OpenRead(
"http://en.wikipedia.org/wiki/List_of_communities_in_New_Brunswick");
But I keep getting a 403 forbidden error. Don't understand why. It worked fine for other pages. I can open the page fine in my browser. How can I fix this?
I wouldn't normally use OpenRead(); try DownloadData() or DownloadString() instead.
Also, it might be that Wikipedia is deliberately blocking your request because you have not provided a user agent string:
WebClient client = new WebClient();
// Wikipedia rejects requests that lack a browser-like User-Agent header.
client.Headers.Add("user-agent",
    "Mozilla/5.0 (Windows; Windows NT 5.1; rv:1.9.2.4) Gecko/20100611 Firefox/3.6.4");
string html = client.DownloadString(
    "http://en.wikipedia.org/wiki/List_of_communities_in_New_Brunswick");
I use WebClient quite often, and learned quite quickly that websites can and will block your request if you don't provide a user agent string that matches a known web browser. Also, if you make up your own user agent string (e.g. "my super cool web scraper"), you will be blocked too.
[Edit]
I changed my example user agent string to that of a modern version of Firefox. The original example I gave was the user agent string for IE6, which is not a good idea. Why? Some websites perform filtering based on IE6 and send anyone with that browser a message, or redirect them to a page that says "Please update your browser", meaning you will not get the content you wanted.
