In the notes for Step 1 in the "How To: Prevent Cross-Site Scripting in ASP.NET" it is stated that you should "not rely on ASP.NET request validation. Treat it as an extra precautionary measure in addition to your own input validation."
Why isn't it enough?
For one thing, hackers are always coming up with new attacks and new ways of injecting XSS. ASP.NET's request validation only gets updated when a new version of ASP.NET is released, so if someone comes up with a new attack the day after a release, request validation won't catch it.
That (I believe) is one of the reasons why the AntiXSS project appeared, so it can have a faster release cycle.
Just two hints:
Your application might output not only data that was entered through your ASP.NET forms. Think of web services, RSS feeds, other databases, information extracted from user uploads, etc.
Sometimes it's necessary to disable the default (effective but overly simple) request validation because you need to accept angle brackets in your forms. Think of a WYSIWYG editor.
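As a sketch (the directive and config attributes are the standard ASP.NET ones), per-page opt-out looks like this. Note that on ASP.NET 4.0+ you must also set requestValidationMode="2.0", otherwise the page-level setting is ignored:

```aspx
<%@ Page Language="C#" ValidateRequest="false" %>

<!-- web.config: required on ASP.NET 4.0+ so ValidateRequest="false"
     is honoured per page instead of validation running for all requests -->
<configuration>
  <system.web>
    <httpRuntime requestValidationMode="2.0" />
  </system.web>
</configuration>
```

With validation off for that page, you take on full responsibility for encoding its output.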
I have read (and am coming to terms with) the fact that no solution can be 100% effective against XSS attacks. It seems that the best we can hope for is to stop "most" XSS attack avenues, and probably have good recovery and/or legal plans afterwards. Lately, I've been struggling to find a good frame of reference for what should and shouldn't be an acceptable risk.
After having read this article, by Mike Brind (A very good article, btw):
http://www.mikesdotnetting.com/Article/159/WebMatrix-Protecting-Your-Web-Pages-Site
I can see that using an html sanitizer can also be very effective in lowering the avenues of XSS attacks if you need the user-input unvalidated.
However, in my case it's kind of the opposite. I have a (very limited) CMS with a web interface. The user input (after being URL encoded) is saved to a JSON file, which is then picked up (decoded) on the viewable page. My main defence against XSS attacks here is that you would have to be one of a few registered members in order to change content at all. By logging registered users, IP addresses, and timestamps, I feel that this threat is mostly mitigated. However, I would like to use a try/catch statement to catch the YSOD produced by ASP.NET's default request validator, in addition to the previously mentioned methods.
My question is: How much can I trust this validator? I know it will detect tags (this partial CMS is NOT set up to accept any tags, logistically speaking, so I am fine with an error being thrown if ANY tag is detected). But what else (if anything) does this inborn validator detect?
I know that XSS can be implemented without ever having touched an angle bracket (or a full tag, at all, for that matter), as html sources can be saved, edited, and subsequently ran from the client computer after having simply added an extra "onload='BS XSS ATTACK'" to some random tag.
Just curious how much this validator can be trusted if a person does want to use it as part of their anti-XSS plans (obviously with a try/catch, so the users don't see the YSOD). Is this validator pretty decent but not perfect, or is this just a "best guess" that anyone with enough knowledge to know XSS, at all, would have enough knowledge that this validation wouldn't really matter?
-----------------------EDIT-------------------------------
At this site...: http://msdn.microsoft.com/en-us/library/hh882339(v=vs.100).aspx
...I found this example for web-pages.
var userComment = Request.Form["userInput"]; // Validated, throws error if input includes markup
Request.Unvalidated("userInput"); // Validation bypassed
Request.Unvalidated().Form["userInput"]; // Validation bypassed
Request.QueryString["userPreference"]; // Validated
Request.Unvalidated().QueryString["userPreference"]; // Validation bypassed
Per the comment: "//Validated, throws error if input includes markup" I take it that the validator throws an error if the string contains anything that is considered markup. Now the question (for me) really becomes: What is considered markup? Through testing I have found that a single angle bracket won't throw an error, but if anything (that I have tested so far) comes after that angle bracket, such as
"<l"
it seems to error. I am sure it does more checking than that, however, and I would love to see what does and does not qualify as markup in the eyes of the request validator.
I believe the ASP.NET request validation is fairly trustworthy but you should not rely on it alone. For some projects I leave it enabled to provide an added layer of security. In general it is preferable to use a widely tested/utilized solution than to craft one yourself. If the "YSOD" (or custom error page) becomes an issue with my clients, I usually just disable the .NET request validation feature for the page.
Once doing so, I carefully ensure that my input is sanitized but more importantly that my output is encoded. So anywhere where I push user-entered (or web service, etc. -- anything that comes from a third party) content to the user it gets wrapped in Server.HtmlEncode(). This approach has worked pretty well for a number of years now.
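A minimal sketch of that pattern (the control and data-access helper names are made up):

```csharp
// Code-behind: encode at the point of output, regardless of where the data came from.
string comment = LoadCommentFromJsonStore();   // hypothetical data-access helper
litComment.Text = Server.HtmlEncode(comment);  // "<script>" renders as "&lt;script&gt;"
```

The key point is that the encoding happens at output time, so it protects you even when the data bypassed input validation (imports, other apps writing to the same store, etc.).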
The link you provided to Microsoft's documentation is quite good. To answer your question about what is considered markup (or what should be considered markup) get on your hacker hat and check out the OWASP XSS Evasion Cheat Sheet.
https://www.owasp.org/index.php/XSS_Filter_Evasion_Cheat_Sheet#HTML_entities
In our application we have a class called GlobalPage.cs. This class inherits from Page, and we have Page_Load overrides and other overrides there. It was called to our attention that we need to add some security to our pages that contain textboxes (some are ASP textboxes and others are HTML inputs). What we want to do is check all those fields globally from one place, not individually on each page. How can we achieve this, possibly using GlobalPage.cs?
I would appreciate any input or help on what can be done.
There is already some built in protection from malicious input. Make sure you did not turn it off.
<pages validateRequest="true"...>
Incoming values by themselves are not a problem for XSS. It is rendering of non-escaped values that causes problems. So while sanitizing input is somewhat useful there is no replacement for correct encoding of output.
Consider reading existing information on XSS protection in ASP.Net like How To: Protect From Injection Attacks in ASP.NET (2.0, note that there are changes in 4.0 - ASP.NET Request Validation).
I would like to ask some suggestions from the more experienced people out there.
I have to filter the user's inputs, where they might try to enter values like
<script type="text/javascript">alert(12);</script>
into the textbox. I would like to ask if you have any recommendations for good practices regarding this issue?
Recently we encountered a problem on one of our SharePoint projects. We tried to input a script into a textbox and, boom, the page crashed... Trapping that case is easy, I think, because we know it is one of the possible user inputs, but what about the things we don't know? There might be other situations we haven't considered besides a plain script tag. Can somebody suggest a good practice regarding this matter?
Thanks in advance! :)
Microsoft actually produces an anti-cross-site scripting library, though when I looked at it, it was little more than a wrapper around various encoding functions in the .NET framework: the AntiXSS library.
Two of the main threats you should consider are:
Script injection
HTML tag injection
Both of these can be mitigated (to a degree) by HTML encoding user input before re-rendering it on the page.
There is also a library called AntiSamy available from the OWASP project, designed to neuter malicious input in web applications.
Jimmy's answer is a good technique for managing "Input Validation & Representation" problems.
But you can also filter your textbox inputs yourself before passing them to a third-party API such as AntiSamy.
I generally use these controls:
1) Minimize the length of the textbox value: not only on the client side but on the server side too (you might not believe me, but buffer overflow attacks exist in scripting environments as well)
2) Apply a whitelist control to the characters the users write into the textbox (client side and server side)
3) Use a whitelist if possible. Blacklists are less secure than whitelists
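A server-side sketch of points 1-3 (the page, control and helper names are all made up; the length limit and character class are examples only):

```csharp
using System;
using System.Text.RegularExpressions;
using System.Web.UI;

public partial class ContactPage : Page
{
    protected void Submit_Click(object sender, EventArgs e)
    {
        string name = txtName.Text;

        // 1) Enforce a maximum length on the server; client-side MaxLength is advisory only.
        if (name.Length > 50) { lblError.Text = "Name too long."; return; }

        // 2)+3) Whitelist the allowed characters instead of blacklisting bad ones.
        if (!Regex.IsMatch(name, @"^[a-zA-Z0-9 .,'-]+$"))
        {
            lblError.Text = "Invalid characters in name.";
            return;
        }

        SaveName(name); // hypothetical persistence helper
    }
}
```

Doing the same checks client-side improves the user experience, but only the server-side copy is a security control.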
It is very important that you perform these controls on the server side.
Sure, it's very easy to forget some controls, so AntiSamy and products like it are very useful. But I advise you to implement your personal "Input Validation" API.
Securing software is not about getting some third-party product; it is about programming in a different way.
I have tried this on SharePoint with both a single line of text and multiple lines of text, and in both cases SharePoint encodes the value (I get no alert).
Which SharePoint version are you using?
I recently landed on an old web application which is full of old-school tricks used to make things work:
GET params: user info in URL parameters
Session information
Hidden elements used for storing information
HTML/JS/CSS dumped in the page without proper encoding, etc.
Window.open to show popups
XSS issues, etc.
Concatenated SQL strings, open to blind SQL injection attacks
and just many more...
The application looks to be 5-7 years old (ASP.NET 1.1), and it looks like the application code has failed to keep pace with better security practices.
Thankfully, browsers and security testing tools have evolved very well over that period, helping people/customers report many security issues every now and then. Keeping them happy and the system secure has become a pain.
Can someone please let me know if you have faced something similar, and point me to a case study or similar guidance on how to address this? Which "freely" available test tools can be used to test web sites for security in a developer environment? What strategies should be used to deal with this situation? How do I progress?
Let me start by saying this: While there are open-source and free security scanner tools, none will be perfect. And in my experience (with PHP at least) they tend to return enough false-positives that it's barely worth running them (but that could have gotten better since I last used them). If you want to use one to try to help identify issues, by all means do so. But don't trust the output either way (from a false-positive and a false-negative perspective).
As far as how to tackle it, I would suggest a step-by-step approach. Pick one type of vulnerability, and eliminate it across the entire application. Then go to the next vulnerability type. So a potential game-plan might be (ordered by severity and ease of fixing):
Fix all SQL Injection vulnerabilities.
Go through the code, find all places where it does SQL queries, and make sure they are using prepared statements and that nothing can get in.
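For example, the typical parameterised form (table, column and variable names are illustrative):

```csharp
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "SELECT * FROM UserAccount WHERE Username = @username", connection))
{
    // The value travels as data, so a quote inside it can never terminate
    // the string literal or append extra SQL.
    command.Parameters.AddWithValue("@username", username);
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read()) { /* ... */ }
    }
}
```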
Fix all XSS vulnerabilities
Find all places where local information (user-submitted or otherwise) is output, and make sure it is properly sanitized and escaped (depending on the use-case).
Fix all CSRF vulnerabilities
Go through the site and make sure that all the form submissions are properly using a CSRF token system to protect them from fraudulent requests.
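In ASP.NET MVC the standard pattern is the anti-forgery token pair (the controller, action and model names here are made up):

```csharp
// Razor view: emits a hidden __RequestVerificationToken field and a matching cookie.
@using (Html.BeginForm("Transfer", "Account"))
{
    @Html.AntiForgeryToken()
    <input type="submit" value="Send" />
}

// Controller: rejects any POST whose token does not match the cookie.
[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Transfer(TransferModel model)
{
    // ... process the legitimate request ...
    return RedirectToAction("Index");
}
```

A forged cross-site form cannot read the victim's cookie, so it cannot supply a matching token.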
Fix any and all authentication and session fixation vulnerabilities
Make sure the authentication and session systems are secure from abuse. This involves making sure your cookies are clean and that any session identifiers are rotated often. And make sure you're storing passwords correctly...
Fix all information injection vulnerabilities
You state that there is user information in URLs and hidden form elements. Go through all of them and change it so that the user cannot inject values where they shouldn't be able to. If this means storing the data into a session object, do so.
Fix all information disclosure vulnerabilities
This is related to the former point, but slightly different. If you use a username in the URL, but can't do anything by changing it, then it's not an injection vulnerability, it's just a disclosure issue. Mop these up, but they are not nearly as critical (depending on what's disclosed of course).
Fix the output
Fix the encoding issues and any method that might generate invalid output. Make sure that everything is sane when it's outputted.
The important thing to note is that anything you fix will make the application safer. If it's a live application right now, don't wait! Don't try to do everything, then test and release. Pick a reasonably sized target (2 to 4 days of work max), complete the target, test and release. Then rinse and repeat. By iterating through the problems in this manner, you're making the site safer and safer as you go along. It will seem like less work to you because there is always an end in sight.
Now, if the application is severe enough, it may warrant a full rewrite. If that's the case, I'd still suggest cleaning up at least the big ticket items in the existing application prior to starting the rewrite. Clean up the SQL Injection, XSS and CSRF vulnerabilities at a bare minimum prior to doing anything else.
It's not an easy thing to do. But taken a small bite at a time, you can make significant progress while staying above water... Any little bit will help, so treat the journey as a series of steps rather than a whole. You'll be better off in the end...
Well, a google on each of the issues will help fix it, so I'll assume you're just worried about the actual risks.
This is easy to fix, just change to $_POST, which is a bit more secure, especially when used with session tokens.
This is still widely used, so I don't see an issue?
Meh, so long as that data is checked server side, and if it doesn't verify, the user has to relog or similar, this is acceptable. Of course, session is preferred, or even cookies.
Tidying up is just a long job; no issues unless passwords etc. are being revealed in the JS. CSS + HTML are almost always unencoded, so this seems fine.
I'm just not a fan of popups, and never use them, can't help here.
Well, host the script locally if possible, and sanitize any XSS that are essential, by stripping tags, URL encoding that kind of thing.
I encountered exactly this situation a couple of years ago on an e-commerce web site. It was in ASP.NET 1.1 and was absolutely appalling in terms of code quality and security practices. Further, I was told by management that a re-write was absolutely out of the question, and could not convince them to budge on that.
Over the course of 2 years I managed to get this system PCI-DSS compliant. This means we had to close all of those security loopholes one by one, and put in place practices to ensure they could not happen again.
The first problem is you have to find all the bugs and security flaws. You can use external vulnerability probing tools (I recommend QualysGuard), but manual testing is the only way to get complete coverage. Write out old-fashioned test scripts to test for XSS, SQL injection, CSRF etc. and enlist the help of testers to run through these scripts. If you can automate these tests, it is well worth doing, but don't avoid covering a test case just because it's too hard to automate. If you can afford security consultants, have them help you scope out the test coverage and write test cases.
This will give you:
a) a list of bugs/flaws
b) a repeatable way to test if you actually fixed each one
Then you can start refactoring and fixing each flaw.
The process I followed is:
step 1: refactor, step 2: test & release, step 3: go back to step 1.
That is, there is a continuous cycle of fixing, testing and releasing. I would recommend committing to regular deployments to your production systems, e.g. monthly, and squeezing as many "refactorings" into each release cycle as you can.
I found automated security tools to be almost useless because:
a) they can provide you with a false sense of security
b) they can give you so many false positives that you are sent off on meaningless tangents instead of working on real security threats
You really need to get to grips with the nature of each security vulnerability and understand exactly how it works. A good way to do this is to research each flaw (eg. start with OWASP top 10), and write a document that you could give to any developer on your team that would explain to them exactly what each flaw is you are trying to protect against and why. I think too often we reach for tools to help fix security flaws, thinking we can avoid the effort of really understanding each threat. Generally speaking, tools are only useful to improve productivity - you still have to take charge and run the show.
Resources:
1) Read the OWASP Top 10.
2) The PCI-DSS specification is also really good stuff and helps you to think holistically about security, i.e. covering way more than just web applications: databases, processes, firewalls, DMZ/LAN separation, etc. Even if it's over the top for your requirements, it's well worth browsing through.
This question is similar to Exploitable PHP Functions.
Tainted data comes from the user, or more specifically an attacker. When a tainted variable reaches a sink function, then you have a vulnerability. For instance a function that executes a sql query is a sink, and GET/POST variables are sources of taint.
What are all of the sink functions in C#? I am looking for functions that introduce a vulnerability or software weakness. I am particularly interested in Remote Code Execution vulnerabilities. Are there whole classes/libraries that contain nasty functionally that a hacker would like to influence? How do people accidentally make dangerous C# code?
Anything that uses regular expressions (particularly the RegularExpressionValidator). To see this, run a RegularExpressionValidator with the regex ^(\d+)+$ and give it 30 digits and an alpha character to validate against.
Some posts:
http://msdn.microsoft.com/en-us/magazine/ff646973.aspx
This is called a Regular Expression Denial of Service attack and it can bring a website to its knees.
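If you are on .NET 4.5 or later, one mitigation is the regex match timeout, which turns catastrophic backtracking into a catchable exception rather than a hung worker thread:

```csharp
using System;
using System.Text.RegularExpressions;

var evil = new Regex(@"^(\d+)+$", RegexOptions.None, TimeSpan.FromMilliseconds(100));
try
{
    // 30 digits plus a trailing letter forces exponential backtracking...
    evil.IsMatch(new string('1', 30) + "a");
}
catch (RegexMatchTimeoutException)
{
    // ...but now the engine aborts after ~100 ms; treat the input as invalid.
}
```

Rewriting the pattern to remove the nested quantifier is the real fix; the timeout is a safety net.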
On the web based side of things, C# (and more generally, ASP.NET) is commonly vulnerable to the following (items listed by OWASP Top 10 2013). I realise you were mainly interested in sink functions, of which I cover some, however you did ask how people accidentally make dangerous C# code so hopefully I've provided some insight here.
A1-Injection
SQL Injection
Generating queries by string concatenation.
var sql = "SELECT * FROM UserAccount WHERE Username = '" + username + "'";
SqlCommand command = new SqlCommand(sql, connection);
SqlDataReader reader = command.ExecuteReader();
This can often be solved with parameterised queries, but if you are using an IN condition you currently can't avoid building the list of placeholders dynamically.
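The usual workaround is to generate the parameter names and concatenate only those, never the values (the table name and variable names here are illustrative):

```csharp
using System.Linq;

// Builds "... WHERE Id IN (@p0, @p1, @p2)" with one parameter per value.
int[] ids = { 3, 7, 42 };
string[] names = ids.Select((_, i) => "@p" + i).ToArray();

var command = new SqlCommand(
    "SELECT * FROM UserAccount WHERE Id IN (" + string.Join(", ", names) + ")",
    connection);
for (int i = 0; i < ids.Length; i++)
    command.Parameters.AddWithValue(names[i], ids[i]);
```

Only developer-controlled placeholder names are concatenated; every user-supplied value still travels as a parameter.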
LDAP Injection
Code such as
searcher.Filter = string.Format("(sAMAccountName={0})", loginName);
can make the application vulnerable. More information here.
OS Command Injection
This code is vulnerable to command injection because the second parameter to Process.Start can have extra commands passed to it using the & character to batch multiple commands:
string strCmdText = @"/C dir c:\files\" + Request.QueryString["dir"];
ProcessStartInfo info = new ProcessStartInfo("CMD.exe", strCmdText);
Process.Start(info);
e.g. foldername && ipconfig
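A safer sketch is to whitelist the value and avoid the shell entirely (the base path and character class are made up for illustration):

```csharp
using System.IO;
using System.Text.RegularExpressions;
using System.Web;

string dir = Request.QueryString["dir"];

// Whitelist the folder name; "&", "|", "..", path separators etc. are all rejected.
if (!Regex.IsMatch(dir ?? "", @"^[a-zA-Z0-9_-]+$"))
    throw new HttpException(400, "Invalid directory name");

// Do the work in-process instead of batching commands through CMD.exe.
string[] files = Directory.GetFiles(Path.Combine(@"c:\files", dir));
```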
A2-Broken Authentication and Session Management
Sign Out
The default Forms Authentication SignOut method does not update anything server side, allowing a captured auth token to be continued to be used.
Calling the SignOut method only removes the forms authentication cookie. The Web server does not store valid and expired authentication tickets for later comparison. This makes your site vulnerable to a replay attack if a malicious user obtains a valid forms authentication cookie.
Using Session State for Authentication
A session fixation vulnerability could be present if a user has used session state for authentication.
A3-Cross-Site Scripting (XSS)
Response.Write (and the shortcut <%= %>) are vulnerable by default, unless the developer has remembered to HTML encode the output. The more recent shortcut <%: HTML encodes by default, although some developers use it to insert values into JavaScript, where they can still be escaped by an attacker. Even using the modern Razor engine it is difficult to get this right:
var name = '@Html.Raw(HttpUtility.JavaScriptStringEncode(Model.Name))';
ASP.NET by default enables Request Validation, which will block any input from cookies, the query string and from POST data that could potentially be malicious (e.g. HTML tags). This appears to cope well with input coming through the particular app, but if there is content in the database that is inserted from other sources like from an app written using other technologies, then it is possible that malicious script code could still be output. Another weakness is where data is inserted within an attribute value. e.g.
<%
alt = Request.QueryString["alt"];
%>
<img src="http://example.com/foo.jpg" alt="<%=alt %>" />
This can be exploited without triggering Request Validation:
If alt is
" onload="alert('xss')
then this renders
<img src="http://example.com/foo.jpg" alt="" onload="alert('xss')" />
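The fix for this case is to encode for the attribute context before output; one sketch, mirroring the vulnerable snippet above:

```aspx
<%
    // Encoded for the attribute context: the embedded quote becomes &quot;,
    // so the injected onload="..." can never break out of the alt value.
    string alt = HttpUtility.HtmlAttributeEncode(Request.QueryString["alt"]);
%>
<img src="http://example.com/foo.jpg" alt="<%= alt %>" />
```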
In old versions of .NET it was a bit of a mine-field for a developer to ensure that their output was correctly encoded using some of the default web controls.
Unfortunately, the data-binding syntax doesn’t yet contain a built-in encoding syntax; it’s coming in the next version of ASP.NET
e.g. not vulnerable:
<asp:Repeater ID="Repeater1" runat="server">
<ItemTemplate>
<asp:TextBox ID="txtYourField" Text='<%# Bind("YourField") %>'
runat="server"></asp:TextBox>
</ItemTemplate>
</asp:Repeater>
vulnerable:
<asp:Repeater ID="Repeater2" runat="server">
<ItemTemplate>
<%# Eval("YourField") %>
</ItemTemplate>
</asp:Repeater>
A4-Insecure Direct Object References
MVC model binding can allow parameters added to POST data to be mapped onto a data model. This can happen unintentionally when the developer hasn't realised that a malicious user may amend parameters in this way. The Bind attribute can be used to prevent this.
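A sketch of the safe form (the model, property and repository names are made up):

```csharp
// Only Name and Email can be set by model binding; a forged "IsAdmin=true"
// field in the POST body is silently ignored.
[HttpPost]
public ActionResult Edit([Bind(Include = "Name,Email")] UserProfile profile)
{
    if (!ModelState.IsValid)
        return View(profile);

    _repository.Save(profile); // hypothetical persistence call
    return RedirectToAction("Index");
}
```

An alternative is a dedicated view model containing only the bindable properties, which avoids maintaining Include lists by hand.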
A5-Security Misconfiguration
There are many configuration options that can weaken the security of an application, for example setting customErrors to Off or enabling trace.
Scanners such as ASafaWeb can check for these common misconfigurations.
A6-Sensitive Data Exposure
Default Hashing
The default password hashing methods in ASP.NET are sometimes not the best.
HashPasswordForStoringInConfigFile() - this could also be bad if it is used to hash a plain password with no added salt.
Article "Our password hashing has no clothes" regarding the ASP.NET membership provider in .NET 4.
A7-Missing Function Level Access Control
Failure to Restrict URL Access
In integrated pipeline mode .NET sees every request, and handlers can authorise each request, even to non-.NET resources (e.g. .js files and images). However, if the application is running in classic mode, .NET only sees requests to files such as .aspx, so other files may be accidentally left unsecured. See this answer for more detail on the differences.
e.g. www.example.com/images/private_photograph_user1.jpg is more likely to be vulnerable in an application that runs in classic mode, although there are workarounds.
A8-Cross-Site Request Forgery (CSRF)
Although legacy Web Forms applications are usually more secure against CSRF, due to requiring the attacker to forge the View State and Event Validation values, newer MVC applications can be vulnerable unless the developer has manually implemented anti-forgery tokens. Note I am not saying that Web Forms is not vulnerable, just that it is more difficult than simply passing on a few basic parameters; there are fixes though, such as integrating the user key into the View State value.
When the EnableEventValidation property is set to true, ASP.NET validates that a control event originated from the user interface that was rendered by that control. A control registers its events during rendering and then validates the events during postback or callback handling. For example, if a list control includes options numbered 1, 2, or 3 when the page is rendered, and if a postback request is received specifying option number 4, ASP.NET raises an exception. All event-driven controls in ASP.NET use this feature by default.
[EnableEventValidation] feature reduces the risk of unauthorized or malicious postback requests and callbacks. It is strongly recommended that you do not disable event validation.
A10-Unvalidated Redirects and Forwards
Adding code such as
Response.Redirect(Request.QueryString["Url"]);
will make your site vulnerable. The attack could be initiated by sending a phishing email to a user containing a link. If the user is vigilant they may have double checked the domain of the URL before clicking. However, as the domain will match your domain which the user trusts, they will click the link unaware that the page will redirect the user to the attacker's domain.
Validation should take place on Url to ensure that it is either a relative, allowed URL or an absolute URL to one of your own allowed domains and pages. You may want to check someone isn't redirecting your users to /Logout.aspx for example. Although there may be nothing stopping an attacker from directly linking to http://www.example.com/Logout.aspx, they could use the redirect to hide the URL so it is harder for a user to understand which page is being accessed (http://www.example.com/Redirect.aspx?Url=%2f%4c%6f%67%6f%75%74%2e%61%73%70%78).
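In MVC, Url.IsLocalUrl covers the relative-URL check (the action name and fallback route are illustrative):

```csharp
public ActionResult Go(string url)
{
    // Accept only site-relative targets; absolute URLs to other hosts and
    // protocol-relative URLs ("//evil.example") all fail this check.
    if (!string.IsNullOrEmpty(url) && Url.IsLocalUrl(url))
        return Redirect(url);

    return RedirectToAction("Index", "Home"); // safe default
}
```

You could additionally blacklist sensitive local pages such as the logout URL, per the point above.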
Others
The other OWASP categories are:
A9-Using Components with Known Vulnerabilities
for which I can't think of anything off-hand that is specific to C#/ASP.NET. I'll update my answer if I think of any (if you think they are relevant to your question).
Process.Start is the first one to come to mind.
I am sure that WindowsIdentity and much of System.Security can also be used for evil.
Of course, there are SQL injection attacks, but I don't think that's what you mean (though remote execution can occur through SQL Server).
Aside from the obvious Process.Start() already mentioned, I can see a couple of ways of potential indirect exploitation.
WinAPI calls via PInvoke to CreateProcess() and whatnot.
Any sort of dynamic assembly loading mechanism using Assembly.Load() and other such overloads. If a compromised assembly made it to the system and loaded.
If running in full trust in general.
With the right permissions, any registry operations could put you at risk.
That's all that comes to mind right now.
IMO the number-one exploitable functions are innocent-looking ones that are very dangerous when used without thought.
In ASP.NET, Response.Write or the shortcut:
<%= searchTermFromUser %>
In ADO.Net:
The string + operator:
var sql = "SELECT * FROM table WHERE name = '" + searchTermFromUser + "'";
Any piece of data you get from the user (or any other external source) and pass to another system or another user is a potential exploit.
If you get a string from the user and display it to another user without using HtmlEncode it's a potential exploit.
If you get a string from the user and use it to construct SQL it's a potential SQL injection.
If you get a string from the user and use it to construct a file name for Process.Start or Assembly.Load, it's a remote execution vulnerability.
You get the point: the danger comes from using unsanitized data. If you never pass user input to an external system without sanitizing it (example: HtmlEncode) or using injection-safe interfaces (example: SQL parameters), you are relatively safe; the minute you forget to sanitize something, the most innocent-looking method can become a security vulnerability.
Note: cookies, html headers and anything else that passes over a network is also data from the user, in most cases even data in your database is data from the user.
Plenty of things in the System.Net, System.XML, System.IO, (anything that takes a URI and/or file path and actually deals with the resource they identify) System.Reflection, System.Security, System.Web, System.Data and System.Threading namespaces can be dangerous, as in they can be used to do bad things that go further than just messing up the current execution. So much that it would be time consuming to try to identify each.
Of course, every method in every third-party assembly will have to be assumed dangerous until shown otherwise. More time-consuming again.
Nor do I think it's a particularly fruitful approach. Producing a checklist of functions only really works with a limited library, or with a large language where much of what would live in a library in a language like C# is part of the language itself.
There are some classically dangerous examples like Process.Start() or anything that executes another process directly, but they are balanced by being quite obviously dangerous. Even a relatively foolhardy and incompetent coder may take care when they use that, while leaving data that goes to other methods unsanitised.
That sanitation of data is a more fruitful thing to look at than any list of functions. Is data validated to remove obviously incorrect input (which may be due to an attack, or may simply be a mistake)? Is it encoded and escaped appropriately for the layer it is going to? There is too much talk about "dangerous" character sequences: ' never hurt anyone; ' not correctly escaped for SQL can hurt when it actually reaches a SQL layer, and the work required to get the data in there correctly is the same as the work required to avoid exploits. Are the layers at which the code communicates with the outside world solid? Are URIs constructed from unexamined input? If so, you can turn some of the more commonly used System.Net and System.XML methods into holes.
Using any type of unsafe code, such as raw pointers, can cause problems. Microsoft provides a good article about unsafe code here: http://msdn.microsoft.com/en-us/library/aa288474(VS.71).aspx
Reflection.Emit and CodeDom
Edit:
Allowing plugins or third-party libraries that use threading can bring your whole application down unless you load those libraries/plugins in a separate AppDomain.
Probably half the framework contains very scary functions. I myself think that File.WriteAllText() is very scary since it can overwrite any file the current user has access to.
A different approach to this question would be how you can manage security. The article at http://ondotnet.com/pub/a/dotnet/2003/02/18/permissions.html contains a basic description concerning the .NET security system, with the System.Security.Permissions namespace containing all permissions .NET makes available. You can also take a look at http://msdn.microsoft.com/en-us/library/5ba4k1c5.aspx for more information.
In short, .NET allows you to limit the permissions a process can have, for example denying methods that change data on disk. You can then check these permissions and act on whether the process has them or not.
Even a simple string comparison can be an issue:
"If an application makes a trust decision based on the results of this String.Compare operation, the result of that decision could be subverted by changing the CurrentCulture."
Take a look at the example; it's fairly easy to miss.
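A minimal illustration (the file-name check is hypothetical):

```csharp
using System;

// Culture-sensitive comparisons can change meaning under, e.g., the Turkish
// culture, where 'i'/'I' casing behaves differently. For trust decisions,
// pin the comparison to ordinal semantics so the result cannot be subverted
// by the thread's CurrentCulture:
bool isRestricted = string.Equals(
    fileName, "config", StringComparison.OrdinalIgnoreCase);
```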
I've seen code where the user could set the name and parameters for a function call in a database. The system would then execute the named function through Reflection without checking anything ...