I have a scenario which requires me to append an HTTP header to all outgoing IE-based HTTP communications on a machine. This doesn't need to work outside of IE.
I first attempted to create a simple HTTP proxy in C#, but the performance of this proxy wasn't very good, and there were issues with HTTPS communications.
My second attempt was to use FiddlerCore, which I hoped would have better performance, but was only marginally faster than what I had created myself.
Aside from writing a TCP filter driver to do this (not in my skillset), is there another option? Strictly speaking, this doesn't have to be an HTTP header. It could even be something I tack on to the user agent string.
I was thinking perhaps about creating a simple BHO, but I'm hoping there is an easier solution... one that I can write in C# perhaps.
How about just reading the user-agent string in your code and, if it's IE, appending the HTTP header?
Just use the user agent string. It is documented here: http://msdn.microsoft.com/en-us/library/ms537503(VS.85).aspx.
Per that article, the following registry key can be used to add a token to the user agent string: SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\User Agent\Pre Platform\Token = Value.
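For illustration, a minimal C# sketch of adding such a token (the token name and value below are placeholders; writing to HKLM requires elevation):

using Microsoft.Win32;

class AddUserAgentToken
{
    static void Main()
    {
        // Placeholder token: after this runs, IE includes it in the user agent string it sends.
        const string key = @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\User Agent\Pre Platform";
        Registry.SetValue(key, "MyCustomToken", "1.0", RegistryValueKind.String);
    }
}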
I'm getting the error "HTTP Error 414. The request URL is too long." From the following article, I understand that this is due to a very long query string:
http://www.mytecbits.com/microsoft/iis/query-string-too-long
In web.config, I have maxQueryStringLength="2097151". Is this the maximum value?
In order to solve this problem, should I set maxUrl in web.config? If so, what's the maximum value supported?
What should I do to fix this error?
This error is actually thrown from http.sys, not from IIS. The error gets thrown before the request is passed along to IIS in the request-handling pipeline.
To verify this, you can check the Server header value in the HTTP response headers, as per https://stackoverflow.com/a/32022511/12484.
To get http.sys to accept longer request URLs without throwing the HTTP 414 error, in the Windows Registry on the server PC, at Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters, create a DWORD-type value named MaxFieldLength with a sufficiently large value, e.g. 65534.
Reference: Http.sys registry settings for Windows
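If you'd rather script the change than edit the registry by hand, a minimal C# sketch (run elevated; restart the HTTP service or reboot for it to take effect):

using Microsoft.Win32;

class SetMaxFieldLength
{
    static void Main()
    {
        const string key = @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters";
        // 65534 is the top of the documented range for MaxFieldLength.
        Registry.SetValue(key, "MaxFieldLength", 65534, RegistryValueKind.DWord);
    }
}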
If you decide to make this change, then obviously it'll need to be made in all environments, including all production servers -- not just on your local dev PC.
Also, whatever script and/or documentation your team uses to set up new server instances will need to be updated to include this registry setting, so that your team doesn’t forget to apply this setting 18 months from now when setting up a new production server.
Finally, be aware that making this change could have adverse security consequences for all applications running on your server, as a large HTTP request submitted by an attacker won't be rejected early in the pipeline as it normally would be.
As an alternative to making this change to bypass the http.sys security, consider changing the request to accept HTTP POST instead of HTTP GET, and put the parameters into the POST request body instead of into a long URL. For more discussion on this, see question Design RESTful GET API with a long list of query parameters.
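As a rough illustration (ASP.NET Web API, with hypothetical names), the long parameter list moves into the POST body, which http.sys does not subject to the URL length limit:

using System.Web.Http;

public class SearchRequest
{
    public int[] Ids { get; set; }
}

public class SearchController : ApiController
{
    // POST api/search with a JSON body like { "ids": [1, 2, 3] }
    public IHttpActionResult Post([FromBody] SearchRequest request)
    {
        // The parameters arrive in the request body, so the URL stays short.
        return Ok(request.Ids.Length);
    }
}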
As described in this answer to "What is the maximum length of a URL in different browsers?", the allowed length of a URL depends on a combination of browser and server, so it's hard to say exactly how long a URL can be. That answer recommends staying below 2000 characters. I don't know why your query string is so long -- can you shorten it? It's hard to give you any recommendations without knowing more about the solution and your query string.
Generally, a URL has its own length limits, and setting this value may solve the problem for a while, but bear in mind that for long-URL situations, best practice is to work with forms. Specifically, it is better to use POST actions instead of GET.
Just to complement the other answers: if you send an AJAX request with massive parameters and receive the 414 error, change the dataType property to JSON and submit it as a POST instead.
This resolved my problem.
Similar questions have been asked about when to use POST and when to use GET in an AJAX request.
Here:
What are the advantages of using a GET request over a POST request?
and here: GET vs. POST ajax requests: When and how to use either?
However, I want to make it clear that that is not exactly what I am asking. I get idempotence, sensitive data, the ability for browsers to be able to try again in the event of an error, and the ability for the browser to be able to cache query string data.
My real scenario is that I want to prevent my users from being able to simply enter the URL of my "Compute.cshtml" file (i.e. the file on the server that my jQuery $.ajax function posts to).
I am in a WebMatrix C#.NET Web Pages environment, and I have tried preceding the file name with an underscore (_), but apparently an AJAX request falls under the same criteria that the underscore was designed to prevent the display of, and it, of course, breaks the request.
So if I use POST I can simply use this logic:
if (!IsPost) // if this is not a post...
{
    Response.Redirect("~/"); // ...redirect back to home page.
}
If I use GET, I suppose I can send additional data, like a string containing the value "AccessGranted", and check it on the other side to see if it equals this value, redirecting if not -- but this could be easily duplicated by typing in the address bar (not that the data on the other side is sensitive, but...).
Anyway, I suppose I am asking if it is okay to always use POST to handle this logic or what the appropriate way to handle my situation is in regards to using GET or POST with AJAX in a WebMatrix C#.net web-pages environment.
My advice is, don't try to stop them. It's harmless.
You won't have direct links to it, so it won't really come up. (You might want your robots.txt to exclude the whole /api directory, for Google's sake).
It is data they have access to anyway (otherwise you need server-side trimming), so you can't be exposing anything dangerous or sensitive.
The advantages of using GETs for GET-like requests are many, as the questions you linked to discuss (caching, semantics, etc.).
So what's the harm in having that url be accessible via direct browser entry? They can POST directly too, if they're crafty enough, using Fiddler "compose" for example. And having the GETs be accessible via url is useful for debugging.
EDIT: See sites like http://www.robotstxt.org/orig.html for lots of details, but a robots.txt that excluded search engines from your web services directory called /api would look like this:
User-agent: *
Disallow: /api/
Similar to IsPost, you can use IsAjax to determine whether the request was initiated by the XmlHttpRequest object in most browsers.
if (!IsAjax)
{
    Response.Redirect("~/WhatDoYouThinkYoureDoing.cshtml");
}
It checks whether the request has an X-Requested-With header with the value XMLHttpRequest, or whether there is an item in the Request object with the key X-Requested-With whose value is XMLHttpRequest.
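Roughly, the check amounts to this sketch (not the actual WebMatrix source):

// Sketch of what IsAjax boils down to in a .cshtml page:
bool isAjax = Request.Headers["X-Requested-With"] == "XMLHttpRequest"
              || Request["X-Requested-With"] == "XMLHttpRequest";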
One way to detect a directly typed URL (as opposed to an AJAX call) is to check for the presence of the Referer header: directly typed URLs won't generate a referrer. But you still won't be able to differentiate the AJAX call from a simple anchor link.
(Just keep in mind that some browsers don't generate the header for XHR requests.)
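A minimal sketch of that check in a Web Pages handler:

// Sketch: a directly typed URL usually arrives with no referrer at all.
if (Request.UrlReferrer == null)
{
    Response.Redirect("~/");
}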
This is how I have currently managed to consume a particular Microsoft web service. Notice that it is located on an HTTPS server and that it requires a username, a password, and a .cer file to be installed in the operating system's "root certificate authorities".
WSHttpBinding binding = new WSHttpBinding();
binding.Security.Mode = SecurityMode.TransportWithMessageCredential;
binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
binding.Security.Message.NegotiateServiceCredential = true;
binding.Security.Message.AlgorithmSuite
= System.ServiceModel.Security.SecurityAlgorithmSuite.Default;
binding.Security.Message.EstablishSecurityContext = true;
EndpointAddress endpoint = new EndpointAddress("https://address.of.service");
//"GreatClient" was created for me automatically by running
//"svcutil.exe https://address.of.service?wsdl"
GreatClient client = new GreatClient(binding, endpoint);
//Username and password for the authentication. Notice that I have also installed
//the required .cer certificate into the system's "root certificate authorities".
client.ClientCredentials.UserName.UserName = "username";
client.ClientCredentials.UserName.Password = "password";
//Now I can start using the client as I wish.
My question is this: How can I obtain all the information necessary so that I can consume the web service with a direct POST to https://address.of.service, and how do I actually perform the POST with C#? I only want to use POST, where I can supply raw XML data using POST directly to https://address.of.service and get back the result as raw XML data. The question is, what is that raw XML data and how exactly should I send it using POST?
(The purpose of this question: The reason I ask is that I wish to consume this service using something other than C# and .NET (such as Ruby, or Cocoa on Mac OS X). I have no way of knowing how on earth to do that, since I don't have an easy-to-use "svcutil.exe" on other platforms to generate the required code for me. This is why I figured that just being able to consume the service using a regular POST would allow me to more easily consume the service on other platforms.)
What you are attempting to do sounds painful to do now and painful to maintain going forwards if anything changes in the server. It's really re-inventing the wheel.
If you haven't considered it already, I would:
(a) Research whether you can use the metadata you have for the service with a proxy generator native to your target platform. There aren't many platforms that don't have at least some tooling that might get you part of the way, if not all of it. Perhaps repost a question targeting Ruby folk, asking what frameworks exist to consume an HTTPS service given its WSDL?
(b) Failing that, if your scenario allows it, I would consider using a proxy written in C# that acts as a facade for the service and translates it into something easier to consume (for example, you might use something like ASP.NET MVC WebAPI, which is flexible and can easily serve up standards-compliant responses over which you can maintain total control).
I suspect one of these may prove easier and more valuable than the road you are on at the moment.
I had to go through something similar when porting .NET WCF code to other platforms. The easiest approach I found was to enable message logging on the WCF client. This can be configured to save both envelope and body and once everything is working on the .NET side of the house, you can use the message log to have "known-good" XML request/response to port to other platforms.
I found this approach to be more elegant since I didn't have to add an additional behavior to log messages, and it can be easily enabled/disabled/tweaked in the config. The Service Trace Viewer Tool that ships with Visual Studio is also handy for reviewing the log files.
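For reference, message logging is switched on in the client's app.config; a minimal sketch (the log path is a placeholder, and the attributes shown are just the common ones):

<system.serviceModel>
  <diagnostics>
    <messageLogging logEntireMessage="true"
                    logMessagesAtServiceLevel="true"
                    logMessagesAtTransportLevel="true" />
  </diagnostics>
</system.serviceModel>
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel.MessageLogging">
      <listeners>
        <add name="messages"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="C:\logs\messages.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>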
If the service should be consumed from other platforms that lack proxy class generation tooling, you could go with REST services. That would allow you to build the input by simple string concatenation instead of complex XML, though its applicability depends on your situation.
Check this discussion : http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/6907d765-7d4c-48e8-9e29-3ac5b4b9c405/
As far as the certificate is concerned, refer http://msdn.microsoft.com/en-us/library/ms733791.aspx on how to configure it.
I know this is not a very precise answer, but you will be the best person to evaluate the above approach, hence the post. Hope it helps.
What I'd do:
1- Create a small C# app that can post to this web service (using svcutil), and modify it to show the XML sent and received. There are several ways to view the XML: logging, Wireshark, etc. To add it directly to the small app, there is another question here that gives a good answer.
2- Once you know what you have to send, you can do it in C# like this:
// implement GetXmlString() to return the XML to post
string xml = GetXmlString();
// create the url (the service in the question is HTTPS, so use the https scheme)
string url = new UriBuilder("https", "address.of.service", 443).ToString();
// create a client object
using (System.Net.WebClient client = new System.Net.WebClient())
{
    // SOAP endpoints typically also require a content type, and often a SOAPAction header
    client.Headers[System.Net.HttpRequestHeader.ContentType] = "text/xml; charset=utf-8";
    // performs an HTTP POST
    client.UploadString(url, xml);
}
I'm not a .NET programmer but I've had to interoperate with a few .NET services and have lots of SOAP/WSDL experience. Sounds like you've captured the XML for your service. The other problem you'll face is authentication. OOTB, .NET web services use NTLM for authentication. Open-source language support for NTLMv2 can be hit and miss (although a quick google search pulled up a few possibilities for ruby), and using NTLM auth over HTTP may be something that you have to wire together yourself. To answer a question above: where are the auth creds? If the service is using NTLM over the wire, authentication is happening at some layer below HTTP. If the service is using NTLM to authenticate HTTP, your NTLM creds are in the HTTP Authorization header. You should be able to tell with wireshark where they are. You'll also probably need a SOAPAction header; this can also be sniffed with wireshark. For the C# client, I'm sure there are docs explaining how to add headers to your request.
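For the C# side, adding such headers to a raw request is straightforward; a sketch (the SOAPAction value is a placeholder -- sniff the real one with Wireshark):

// Sketch: hand-rolled POST with content type and SOAPAction headers.
var request = (System.Net.HttpWebRequest)System.Net.WebRequest.Create("https://address.of.service");
request.Method = "POST";
request.ContentType = "text/xml; charset=utf-8";
request.Headers.Add("SOAPAction", "\"http://tempuri.org/IService/Operation\"");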
I'm stuck on a clients host that has medium trust setup which blocks cross domain requests and need data from a 3rd party domain. I now have the option to use JSONP.
I've used JSONP from the client with jQuery to get around the browser's cross-domain security, and I've used HttpWebRequest in ASP.NET 3.5.
Is it possible to consume JSONP on the server, and if so, how?
I don't think it is, but worth asking seeing as I already have this app written server side....
Thanks,
Denis
The easy way might be just to proxy the JSONP request through your server. If that isn't an option (because the data must be processed in some way on the server), you can manually strip off the function call from the return and then JSON-parse the rest.
So if the JSONP call returns:
F001( { "moose" : "sister" } )
First, erase everything up to the first parenthesis and after the last so you have
{ "moose" : "sister" }
And parse that into whatever you need.
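Putting that together server-side in C# (a sketch; the URL and callback name are placeholders):

// Sketch: fetch a JSONP response server-side and unwrap it to plain JSON.
using (var client = new System.Net.WebClient())
{
    string jsonp = client.DownloadString("http://third.party.example/data?callback=F001");
    int start = jsonp.IndexOf('(');
    int end = jsonp.LastIndexOf(')');
    string json = jsonp.Substring(start + 1, end - start - 1).Trim();
    // json now holds { "moose" : "sister" } -- parse it with your JSON library of choice.
}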
Is there a way to add a custom prefix/URI scheme that is not http or https? The HttpListener.Prefixes.Add method only accepts http:// and https:// prefixes.
I just don't want to recreate the functionality of this class if I don't have to.
What did you have in mind? Mainly, I doubt it; besides, it will still only handle http[s], so why confuse things with a different scheme name? You can listen on a different port, though, by adding it to the prefix list (e.g. "http://127.0.0.1:90/"). If a client connects on that port using the correct protocol (http vs https) then it would probably work - you'd just have a lot of work to do at the client to tell it how to handle that scheme.
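For example, listening on a non-default port still works with the standard scheme (a minimal sketch; may require elevation or a URL ACL reservation):

using System;
using System.Net;

class PortListener
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://127.0.0.1:90/"); // custom port, standard http scheme
        listener.Start();
        HttpListenerContext context = listener.GetContext(); // blocks until a request arrives
        Console.WriteLine("Got request for " + context.Request.Url);
        context.Response.Close();
        listener.Stop();
    }
}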
I'm not sure I see a point, to be honest...