I'm rather new to sending/receiving over networks/sockets/network streams and so on.
I'm making an IRC program that can communicate with Twitch.tv. They have an API, and they have examples of all sorts of requests you would use to get different kinds of information.
https://github.com/justintv/Twitch-API/tree/master/v3_resources
One example of their requests is this:
curl -H 'Accept: application/vnd.twitchtv.v3+json' \
-X GET https://api.twitch.tv/kraken/chat/kraken_test_user
I have tried to do some research on requests, and I sort of understand some, but for the most part I could not find any resources that help make it click for me.
In the above example, what are the important parts of that request? curl? -H? Is that one big command, or is it two commands separated by the \ at the end of the first line?
Then, the biggest question, how to send requests like the one above using C#?
EDIT 1:
I also know that I will be getting responses in JSON. Is there anything built in that assists with receiving/parsing JSON?
And what about using PUT to change some JSON? (Some things in the API allow PUT.)
For the first part of your question, you asked what the important parts are:
It has an accept header of application/vnd.twitchtv.v3+json
It is a GET request
The api url: https://api.twitch.tv/kraken/chat/kraken_test_user
This request in C# could look like the following ("could" because there is more than one way to do it):
// Requires: using System.Net.Http; using System.Net.Http.Headers; using System.Threading.Tasks;
private async Task<string> GetRequest(string url)
{
    using (var httpClient = new HttpClient())
    {
        // Ask Twitch for the v3 JSON representation via the Accept header
        httpClient.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/vnd.twitchtv.v3+json"));
        var response = await httpClient.GetAsync(url);
        var contents = await response.Content.ReadAsStringAsync();
        return contents;
    }
}
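A minimal sketch of calling it (the URL is the one from the question; where you await it from is up to you):
// Hypothetical caller: await the helper with the Twitch URL from the question
var json = await GetRequest("https://api.twitch.tv/kraken/chat/kraken_test_user");
Console.WriteLine(json); // the raw JSON string returned by the API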
Note that the files in the link you posted are Markdown files, which Google describes as:
An MD, or markdown document, is a text file created using one of several possible dialects of the Markdown language. MD files use plain text formatting but include inline text symbols that define how to format the text, and are designed for authoring plain text documentation that can be easily converted to HTML.
curl -H 'Accept: application/vnd.twitchtv.v3+json' \
-X GET https://api.twitch.tv/kraken/chat/kraken_test_user
http://curl.haxx.se/docs/manpage.html explains the curl command, which here is given two switches, -H and -X. Quoting that page:
-H, --header
(HTTP) Extra header to include in the request when sending HTTP to a
server. You may specify any number of extra headers. Note that if you
should add a custom header that has the same name as one of the
internal ones curl would use, your externally set header will be used
instead of the internal one. This allows you to make even trickier
stuff than curl would normally do. You should not replace internally
set headers without knowing perfectly well what you're doing. Remove
an internal header by giving a replacement without content on the
right side of the colon, as in: -H "Host:". If you send the custom
header with no-value then its header must be terminated with a
semicolon, such as -H "X-Custom-Header;" to send "X-Custom-Header:".
curl will make sure that each header you add/replace is sent with the
proper end-of-line marker, you should thus not add that as a part of
the header content: do not add newlines or carriage returns, they will
only mess things up for you.
See also the -A, --user-agent and -e, --referer options.
Starting in 7.37.0, you need --proxy-header to send custom headers
intended for a proxy.
Example:
# curl -H "X-First-Name: Joe" http://192.168.0.1/
WARNING: headers set with this option will be set in all requests -
even after redirects are followed, like when told with -L, --location.
This can lead to the header being sent to other hosts than the
original host, so sensitive headers should be used with caution
combined with following redirects.
This option can be used multiple times to add/replace/remove multiple
headers.
The "\" makes the next line be added to the first line.
-X, --request
(HTTP) Specifies a custom request method to use when communicating
with the HTTP server. The specified request method will be used
instead of the method otherwise used (which defaults to GET). Read the
HTTP 1.1 specification for details and explanations. Common additional
HTTP requests include PUT and DELETE, but related technologies like
WebDAV offers PROPFIND, COPY, MOVE and more.
Normally you don't need this option. All sorts of GET, HEAD, POST and
PUT requests are rather invoked by using dedicated command line
options.
This option only changes the actual word used in the HTTP request, it
does not alter the way curl behaves. So for example if you want to
make a proper HEAD request, using -X HEAD will not suffice. You need
to use the -I, --head option.
The method string you set with -X will be used for all requests, which
if you for example use -L, --location may cause unintended
side-effects when curl doesn't change request method according to the
HTTP 30x response codes - and similar.
(FTP) Specifies a custom FTP command to use instead of LIST when doing
file lists with FTP.
(POP3) Specifies a custom POP3 command to use instead of LIST or RETR.
(Added in 7.26.0)
(IMAP) Specifies a custom IMAP command to use instead of LIST. (Added
in 7.30.0)
(SMTP) Specifies a custom SMTP command to use instead of HELP or VRFY.
(Added in 7.34.0)
If this option is used several times, the last one will be used.
In C#, there is also a WebRequest class; https://msdn.microsoft.com/en-CA/library/456dfw4f(v=vs.110).aspx has a good example of how to use it to get data from a given URL.
As for handling JSON, look into http://www.newtonsoft.com/json (Json.NET), which is a very commonly used library for parsing JSON responses. PUT is an HTTP verb, like GET or POST, that tells the server how to process the request. I'd also suggest being careful about posting such a broad set of questions here in the future; this is something a class could spend an hour covering, and I doubt your intention is to get someone else to do your homework, right?
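As a rough sketch of both pieces, here is how reading a JSON response with Json.NET and sending a PUT could look (the property name, PUT URL, and request body below are placeholders for illustration, not taken from the Twitch API):
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq; // from the Newtonsoft.Json NuGet package

private async Task GetAndPutExample()
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/vnd.twitchtv.v3+json"));

        // Parse a JSON response; "some_property" is just a placeholder name
        var json = await client.GetStringAsync("https://api.twitch.tv/kraken/chat/kraken_test_user");
        var parsed = JObject.Parse(json);
        Console.WriteLine(parsed["some_property"]);

        // Send a PUT with a JSON body; the endpoint and body are placeholders
        var body = new StringContent("{ \"example\": \"value\" }", Encoding.UTF8, "application/json");
        var putResponse = await client.PutAsync("https://api.twitch.tv/kraken/some/endpoint", body);
        Console.WriteLine(putResponse.StatusCode);
    }
}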
Related
I have a .NET Core project and added Stackify Prefix to monitor requests, but in the responses Prefix shows only the headers, not the body of the response. Is it possible to see the full response body?
On the Prefix site I found this information:
It can capture incoming post data, it can also capture the response and the response headers and part of the response body. Right now, we limit that to only be a certain amount of characters so if it’s returning something larger, it won’t capture all of it.
Is it possible to change this?
There is not a way to change this at the moment; if the response body is too large, it will not show up in the traces.
Stackify has an Ideas portal where you can submit suggested changes. Their COO gets notified when a new request has been made and when a request has been upvoted by several clients; he takes each request into consideration and arranges them into Stackify's roadmap. You can also subscribe to an idea to stay updated on its progress.
https://ideas.stackify.com
I want to use Nancy with the default routing, as it's clean and works well, however I want an option to log all incoming requests to the console (I'm using Nancy's self-hosting module) irrespective of whether an explicit route exists. Put simply, I want to be able to capture the verb, the incoming request URI, any posted data (if it's a POST request), etc.
How do I do this? Before/After only seem to run for requests that match an existing route, and a 404 does not trigger OnError either. Also, using Get["/(.*)"] only catches GET requests and will ignore other HTTP verbs.
Use the Before/After hooks at the application level rather than the module level; see https://github.com/NancyFx/Nancy/wiki/The-Application-Before%2C-After-and-OnError-pipelines
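A minimal sketch of what that could look like in a custom bootstrapper (assuming the default TinyIoC-based bootstrapper; the console logging format is just an example):
using System;
using Nancy;
using Nancy.Bootstrapper;
using Nancy.TinyIoc;

public class LoggingBootstrapper : DefaultNancyBootstrapper
{
    protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines)
    {
        base.ApplicationStartup(container, pipelines);

        // Runs for every incoming request, whether or not a route matches
        pipelines.BeforeRequest += ctx =>
        {
            Console.WriteLine("{0} {1}", ctx.Request.Method, ctx.Request.Url);
            return null; // null means "keep processing the request normally"
        };
    }
}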
Similar questions have been asked about the nature of when to use POST and when to use GET in an AJAX request
Here:
What are the advantages of using a GET request over a POST request?
and here: GET vs. POST ajax requests: When and how to use either?
However, I want to make it clear that that is not exactly what I am asking. I get idempotence, sensitive data, the ability for browsers to be able to try again in the event of an error, and the ability for the browser to be able to cache query string data.
My real scenario is such that I want to prevent my users from being able to simply enter in the URL to my "Compute.cshtml" file (i.e. the file on the server that my jQuery $.ajax function posts to).
I am in a WebMatrix C#.net web-pages environment and I have tried to precede the file name with an underscore (_), but apparently an AJAX request falls under the same criteria that this underscore was designed to prevent the display of and it, of course, breaks the request.
So if I use POST I can simply use this logic:
if (!IsPost) // if this is not a POST...
{
    Response.Redirect("~/"); // ...redirect back to the home page.
}
If I use GET, I suppose I can send additional data like a string containing the value "AccessGranted" and check it on the other side to see if it equals this value and redirect if not, but this could be easily duplicated through typing in the address bar (not that the data is sensitive on the other side, but...).
Anyway, I suppose I am asking if it is okay to always use POST to handle this logic or what the appropriate way to handle my situation is in regards to using GET or POST with AJAX in a WebMatrix C#.net web-pages environment.
My advice is, don't try to stop them. It's harmless.
You won't have direct links to it, so it won't really come up. (You might want your robots.txt to exclude the whole /api directory, for Google's sake).
It is data they have access to anyway (otherwise you need server-side trimming), so you can't be exposing anything dangerous or sensitive.
The advantages in using GETs for GET-like requests are many, as you linked to (caching, semantics, etc)
So what's the harm in having that url be accessible via direct browser entry? They can POST directly too, if they're crafty enough, using Fiddler "compose" for example. And having the GETs be accessible via url is useful for debugging.
EDIT: See sites like http://www.robotstxt.org/orig.html for lots of details, but a robots.txt that excludes search engines from a web services directory called /api would look like this:
User-agent: *
Disallow: /api/
Similar to IsPost, you can use IsAjax to determine whether the request was initiated by the XmlHttpRequest object in most browsers.
if (!IsAjax)
{
    Response.Redirect("~/WhatDoYouThinkYoureDoing.cshtml");
}
It checks the request to see if it has an X-Requested-With header with the value XMLHttpRequest, or if there is an item in the Request object with the key X-Requested-With that has a value of XMLHttpRequest.
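Under the hood, that check is roughly equivalent to something like this (a sketch, not the actual WebMatrix source):
// Roughly what IsAjax looks for: the X-Requested-With header/value set by XHR libraries like jQuery
bool isAjax = Request.Headers["X-Requested-With"] == "XMLHttpRequest"
              || Request["X-Requested-With"] == "XMLHttpRequest";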
One way to tell a directly typed URL apart from a call made by your page is to check for the presence of the HTTP Referer header. Directly typed URLs won't generate a referrer, but you still won't be able to differentiate the call from a simple anchor link.
(Just keep in mind that some browsers don't generate the header for XHR requests.)
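A sketch of that check in a Web Pages handler (assuming you simply want to bounce requests that arrive with no referrer):
// No Referer header usually means the URL was typed or bookmarked directly
if (Request.UrlReferrer == null)
{
    Response.Redirect("~/");
}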
I'm getting a 406 error when trying to use RestSharp to post a request to a third-party application. I'm new to REST, so I have to admit I didn't even know you could add headers. I tried adding these, but I'm still getting the same issue:
var client = new RestClient(myURL);
RestRequest request = new RestRequest("restAction", Method.POST);
request.AddHeader("Accept", "text/plain");
request.AddHeader("Content-Type", "text/plain");
request.AddParameter("parameter1", param1);
request.AddParameter("parameter2", param2);
var response = client.Execute(request);
From what I've read, this may be dealing with a header named "accept". Is that right?
Any idea what could be going on?
In general in HTTP, when a client makes a request to a server, it tells the server what kinds of formats it's prepared to understand (accept). This list of acceptable formats is what the Accept header is for. If the server can't respond using any of the media types in the Accept header, it will return a 406. Otherwise, it will indicate which media type it chose in the Content-Type header of the response. Putting "*/*" in the Accept header tells the server that the client can handle any response media type.
In my original comment to your question, I said that RestSharp looks like it's including "*" in the Accept header by default, but looking closer I see now that it's actually not. So, if you don't override the Accept header like you've done here, the default header value is "application/json","application/xml","text/json","text/x-json","text/javascript","text/xml", and it appears the server you're talking to doesn't speak any of these media types.
If the server you're working with doesn't speak JSON or XML, I don't think you can use RestSharp unless you create your own deserializer. I'm not sure if you can do this from the public API or if you'd have to modify the source yourself and recompile it for your own needs.
Since you're still getting HTTP errors from the server, I would recommend taking RestSharp out of the equation for now and just speaking HTTP directly to the server until you actually get a correct response. You can use a tool like Fiddler to make HTTP requests directly. When you send the request (for now, in the debugging stage), send an Accept header of "*/*" to get around the 406. Once you've figured out what media types the server can send back to you, change this back to a media type you know you can read and you know the server can send.
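For example, here is a minimal raw request you could make while debugging (plain HttpClient rather than RestSharp; a GET is shown for simplicity, and the URL is a placeholder for your third-party endpoint):
using System;
using System.Net.Http;
using System.Threading.Tasks;

private async Task ProbeServer()
{
    using (var client = new HttpClient())
    {
        // "*/*" tells the server that any response media type is acceptable
        client.DefaultRequestHeaders.Add("Accept", "*/*");

        var response = await client.GetAsync("https://example.com/restAction");
        Console.WriteLine((int)response.StatusCode);
        // The response's Content-Type tells you what the server actually speaks
        Console.WriteLine(response.Content.Headers.ContentType);
    }
}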
It sounds like the main issue here is really just not knowing the protocol of the server. If there's any documentation on the service you're talking to, I would read that very carefully to figure out what media types it's prepared to respond with and how to craft the URLs that it expects.
I have a scenario which requires me to append an HTTP header to all outgoing IE-based HTTP communications on a machine. This doesn't need to work outside of IE.
I first attempted to create a simple HTTP proxy in C#, but the performance of this proxy wasn't very good, and there were issues with HTTPS communications.
My second attempt was to use FiddlerCore, which I hoped would have better performance, but was only marginally faster than what I had created myself.
Aside from writing a TCP filter driver to do this (not in my skillset), is there another option? Strictly speaking, this doesn't have to be an HTTP header. It could even be something I tack on to the user agent string.
I was thinking perhaps about creating a simple BHO, but I'm hoping there is an easier solution... one that I can write in C# perhaps.
How about just reading the user-agent string in your code and, if it's IE, appending the HTTP header then?
Just use the user agent string. It is documented here: http://msdn.microsoft.com/en-us/library/ms537503(VS.85).aspx.
Per that article, the following registry key can be used to add a token to the user agent string: SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\User Agent\Pre Platform\Token = Value.
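A sketch of setting such a token from C# (this assumes the per-user hive; the token name and value are placeholders, and IE typically needs to be restarted before it picks the change up):
using Microsoft.Win32;

// IE adds the value names under the Pre Platform key as extra tokens in its user agent string
using (var key = Registry.CurrentUser.CreateSubKey(
    @"SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\User Agent\Pre Platform"))
{
    key.SetValue("MyCustomToken", "1.0");
}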