Where I work, we use an application called Checkmarx to analyze the security of our applications.
In one of these analyses it detected the following problems:
Reflected XSS All Clients:
The application's GetBarcosNaoVinculados embeds untrusted data in the
generated output with Json, at line 1243 of
.../Controllers/AdminUserController.cs. This untrusted data is
embedded straight into the output without proper sanitization or
encoding, enabling an attacker to inject malicious code into the
output. The attacker would be able to alter the returned web page by
simply providing modified data in the user input usuarioId, which is
read by the GetBarcosNaoVinculados method at line 1243 of
.../Controllers/AdminUserController.cs. This input then flows through
the code straight to the output web page, without sanitization.
public JsonResult GetBarcosNaoVinculados(string usuarioId)
{
    // .....
    return Json(barcosNaoVinculados, JsonRequestBehavior.AllowGet);
}
Elsewhere in the system it reports the same problem, but with these two methods:
The application's LoadCodeRve embeds untrusted data in the generated output with SerializeObject, at line 738 of .../BR.Rve.UI.Site/Controllers/InfoApontamentoController.cs. This untrusted data is embedded straight into the output without proper sanitization or encoding, enabling an attacker to inject malicious code into the output. The attacker would be able to alter the returned web page by saving malicious data in a data-store ahead of time. The attacker's modified data is then read from the database by the Buscar method with Where, at line 78 of .../Repository/Repository.cs. This untrusted data then flows through the code straight to the output web page, without sanitization.
public virtual IEnumerable<TEntity> Buscar(Expression<Func<TEntity, bool>> predicate)
{
    return Dbset.Where(predicate);
}

public string LoadCodeRve()
{
    return JsonConvert.SerializeObject(items);
}
It seems to be related to how the JSON output is handled. Would anyone know how to treat this type of problem?
As the warning message indicates, you need to perform some form of input validation (or sanitization) and, as a secure coding best practice, output encoding before rendering the output into the page. Checkmarx searches for the existence of these "sanitizers", which are predefined in its Checkmarx query. One, for instance, is the use of the AntiXSS libraries (e.g. the JavascriptEncode function).
The two critical lines to look out for are already pointed out by Checkmarx:
return Json(barcosNaoVinculados, JsonRequestBehavior.AllowGet);
and
return JsonConvert.SerializeObject(items);
Whichever pages these values (JSON or string) end up in, they need to be escaped. Now, depending on the templating engine you are using, you might already get instant XSS protection. For example, "The Razor engine used in MVC automatically encodes all output sourced from variables, unless you work really hard to prevent it doing so", unless of course you used the Html.Raw helper method.
As promoters of application security, we believe in not trusting the input and in having layers of defenses, so my suggestion is to explicitly indicate that you want to encode the output by passing in a JsonSerializerSettings argument:
return JsonConvert.SerializeObject(items, new JsonSerializerSettings { StringEscapeHandling = StringEscapeHandling.EscapeHtml });
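As a minimal sketch of the effect (the items payload here is hypothetical), EscapeHtml replaces HTML-sensitive characters in string values with \uXXXX escapes, so the JSON stays valid but is safe to embed in an HTML context:

using System;
using Newtonsoft.Json;

class EscapeHtmlDemo
{
    static void Main()
    {
        // Hypothetical payload standing in for the untrusted data.
        var items = new { Comment = "<script>alert('xss')</script>" };

        // Default serialization leaves < and > intact:
        // {"Comment":"<script>alert('xss')</script>"}
        Console.WriteLine(JsonConvert.SerializeObject(items));

        // With EscapeHtml, HTML-sensitive characters become \uXXXX escapes:
        // {"Comment":"\u003cscript\u003ealert(\u0027xss\u0027)\u003c/script\u003e"}
        Console.WriteLine(JsonConvert.SerializeObject(items,
            new JsonSerializerSettings { StringEscapeHandling = StringEscapeHandling.EscapeHtml }));
    }
}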
The only dilemma here is that Checkmarx might not recognize this as a sanitizer, because it may not be in their predefined list of sanitizers. You could always present this solution as an argument to the security team that is running the security scans.
For the case of the JsonResult return, you may want to JavaScript-encode the barcosNaoVinculados variable:
return Json(HttpUtility.JavaScriptStringEncode(barcosNaoVinculados), JsonRequestBehavior.AllowGet);
Now, Checkmarx may not recognize this either. You can try using the ones that Checkmarx recognizes (e.g. Encoder.JavascriptEncode or AntiXss.JavascriptEncode), but I don't think those NuGet packages will work in your project type.
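One caveat, as an assumption on my part: HttpUtility.JavaScriptStringEncode takes a single string, so if barcosNaoVinculados is a collection rather than a string, you would encode each string member before serializing, along these lines (the Barco type, its Nome property, and the lookup helper are hypothetical, not from the original code):

// Hypothetical sketch; requires using System.Collections.Generic;
// using System.Linq; and using System.Web;
public JsonResult GetBarcosNaoVinculados(string usuarioId)
{
    // Hypothetical lookup standing in for the elided data access.
    IEnumerable<Barco> barcosNaoVinculados = BuscarBarcosNaoVinculados(usuarioId);

    // Encode each string member so no raw HTML reaches the JSON output.
    var encoded = barcosNaoVinculados
        .Select(b => new { Nome = HttpUtility.JavaScriptStringEncode(b.Nome) })
        .ToList();

    return Json(encoded, JsonRequestBehavior.AllowGet);
}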
Related
The methods in the .NET platform's DirectorySecurity class (e.g. GetAccessRules()) are far too slow for my purposes. Instead, I wish to directly query the NTFS $Secure metafile (or, alternatively, the $SDS stream) in order to retrieve a list of local accounts and their associated permissions for each file system object.
My plan is to first read the $MFT metafile (which I've already figured out how to do) - and then, for each entry therein, look up the appropriate security descriptor in the metafile (or stream).
The ideal code block would look something like this:
//I've already successfully written code for MFTReader:
var mftReader = new MFTReader(driveToAnalyze, RetrieveMode.All);
IEnumerable<INode> nodes = mftReader.GetNodes(driveToAnalyze.Name);
foreach (NodeWrapper node in nodes)
{
//Now I wish to return security information for each file system object
//WITHOUT needing to traverse the directory tree.
//This is where I need help:
var securityInfo = GetSecurityInfoFromMetafile(node.FullName, node.SecurityID);
yield return Tuple.Create(node.FullName, securityInfo.PrincipalName, DecodeAccessMask(securityInfo.AccessMask));
}
And I would like my output to look like this:
c:\Folder1\File1.txt jane_smith Read, Write, Execute
c:\Folder1\File1.txt bill_jones Read, Execute
c:\Folder1\File2.txt john_brown Full Control
etc.
I am running .NET version 4.7.1 on Windows 10.
There's no API to read directly from $Secure, just like there is no API to read directly from $MFT. (There's FSCTL_QUERY_FILE_LAYOUT but that just gives you an abstracted interpretation of the MFT contents.)
Since you said you can read $MFT, it sounds like you must be using a volume handle to read directly from the volume, just like chkdsk and similar tools. That allows you to read whatever you want provided you know how to interpret the on-disk structures. So your question reduces to how to correctly interpret the $Secure file.
I will not give you code snippets or exact data structures, but I will give you some very good hints. There are actually two approaches possible.
The first approach is to scan forward in $SDS. All of the security descriptors are there, in SecurityId order. You'll find that at various 16-byte aligned offsets there will be a 20-byte header that includes the SecurityId, among other information, followed by the security descriptor in serialized form. The SecurityId values appear in ascending order in $SDS. Also, every alternate 256K region in $SDS is a mirror of the previous 256K region, so to cut the work in half, only consider the regions 0..256K-1, 512K..768K-1, etc.
The second approach is to make use of the $SII index, also part of the $Secure file. Its structure is a B-tree, very similar to how directories are structured in NTFS. The index entries in $SII use SecurityId as the lookup key and also contain the byte offset in $SDS where the corresponding header and security descriptor can be found. This approach is more performant than scanning $SDS, but requires you to know how to interpret many more structures.
Craig pretty much covered everything; I would just like to clarify a few points:
Navigate to node number 9, which corresponds to $Secure.
Get all the streams and all the fragments of the $SDS stream.
Read the content and extract each security descriptor.
Use IsValidSecurityDescriptor to make sure the SD is valid, and stop when you reach an invalid SD.
Remember that $Secure stores the security descriptors in self-relative format.
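For what it's worth, here is a minimal C# sketch of those steps once you have the raw $SDS bytes in memory. The 20-byte entry header layout (4-byte hash, 4-byte SecurityId, 8-byte file offset, 4-byte entry length) and the 16-byte alignment are assumptions based on the descriptions above, so verify them against the actual on-disk structures:

using System;
using System.Runtime.InteropServices;

// Sketch: walk a raw $SDS buffer and validate each self-relative
// security descriptor with IsValidSecurityDescriptor.
static class SdsScanner
{
    [DllImport("advapi32.dll")]
    static extern bool IsValidSecurityDescriptor(IntPtr pSecurityDescriptor);

    public static void Scan(byte[] sds)
    {
        const int Mirror = 256 * 1024; // every second 256K region mirrors the previous one
        var pin = GCHandle.Alloc(sds, GCHandleType.Pinned);
        try
        {
            int offset = 0;
            while (offset + 20 <= sds.Length)
            {
                // Skip the mirrored regions (256K..512K-1, 768K..1M-1, ...).
                if ((offset / Mirror) % 2 == 1)
                {
                    offset = (offset / Mirror + 1) * Mirror;
                    continue;
                }

                uint securityId = BitConverter.ToUInt32(sds, offset + 4);   // assumed header field
                uint entryLength = BitConverter.ToUInt32(sds, offset + 16); // assumed header field
                if (entryLength < 20 || offset + entryLength > sds.Length)
                    break; // no further valid entries

                IntPtr sd = pin.AddrOfPinnedObject() + offset + 20;
                if (!IsValidSecurityDescriptor(sd))
                    break; // stop at the first invalid SD, as suggested above

                Console.WriteLine("SecurityId {0}: {1}-byte descriptor",
                    securityId, entryLength - 20);

                // Entries start on 16-byte boundaries.
                offset += (int)((entryLength + 15) & ~15u);
            }
        }
        finally { pin.Free(); }
    }
}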
Are you using FSCTL_QUERY_FILE_LAYOUT? The only real source I have found on how to use this function is here:
https://wimlib.net/git/?p=wimlib;a=blob;f=src/win32_capture.c;h=d62f7d07ef20c08c9bec93f261131033e39b159b;hb=HEAD
It looks like he solves the problem with security descriptors like this: he gets basically all information about files from the MFT, but not the security descriptors. For those, he reads the SecurityId field from the MFT and checks a hash table for an existing mapping from that ID to the ACL. If there is one, he just returns it; otherwise he calls NtQuerySecurityObject and caches the result in the hash table. This should drastically reduce the number of calls. It assumes that there are few distinct security descriptors and that the SecurityId field correctly reflects the single-instancing of the descriptors.
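A minimal sketch of that caching idea might look like this; GetSecurityDescriptor is a hypothetical stand-in for whatever query you use (e.g. NtQuerySecurityObject via P/Invoke):

using System;
using System.Collections.Generic;
using System.Security.AccessControl;

// Sketch of the SecurityId -> descriptor cache described above. The
// expensive per-file query runs only the first time a SecurityId is seen.
class SecurityDescriptorCache
{
    readonly Dictionary<uint, RawSecurityDescriptor> _cache =
        new Dictionary<uint, RawSecurityDescriptor>();

    public RawSecurityDescriptor Lookup(string path, uint securityId)
    {
        RawSecurityDescriptor sd;
        if (!_cache.TryGetValue(securityId, out sd))
        {
            sd = GetSecurityDescriptor(path); // hypothetical expensive call
            _cache[securityId] = sd;
        }
        return sd;
    }

    static RawSecurityDescriptor GetSecurityDescriptor(string path)
    {
        // Placeholder for the real query (e.g. NtQuerySecurityObject via P/Invoke).
        throw new NotImplementedException();
    }
}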
I'm struggling with safely encoding HTML-like text in JSON. The text is written into a <textarea>, transferred by AJAX to the server (.NET 4.5 MVC), and stored in a database as a JSON string.
When transferring it to the server, I get the famous "A potentially dangerous Request.Form value was detected" 500 server error. To avoid this message, I use the [AllowHtml] attribute on the model being transferred. By doing so I open up an XSS vulnerability, in case anyone pastes in { "key1": "<script>alert(\"danger!\")</script>" }. So I would like to use something like
tableData.Json = AntiXssEncoder.HtmlEncode(json, true);
Problem is, I cannot do this on the full JSON string, as it will render something like
{&quot;key1&quot;: ...}
which of course is not what I want. It should be more like
{ "key1": "&lt;script&gt;alert(&quot;danger!&quot;)&lt;/script&gt;" }
With this result the user can write whatever code they want, but I can avoid it being rendered as HTML and just display it as ordinary text. Does anyone know how to traverse JSON with C# (Newtonsoft Json.NET) such that strings can be encoded with AntiXssEncoder.HtmlEncode(..., ...)? Or am I on the wrong track here?
Edit:
The data is non-uniform, so deserialization into uniform objects is not an option.
The data will probably be opened to the public, so storing the data encoded would ease my soul.
If you already have the data as a JSON string, you can parse it into proper objects with something like Json.NET using JsonConvert.DeserializeObject() (or anything else; there are actually quite a few options to choose from). Once it's plain objects, you can go through them, apply any encoding you want, then serialize them again into a JSON string. You can also have a look at this question and its answers.
Another approach you can take is to just leave it alone until actually inserting it into the page DOM. You can store unencoded data in the database; you can even send it to the client without HTML encoding as JSON data (of course it needs to be encoded for JSON, but any serializer does that). You need to be careful not to generate it directly into the page source this way, but as long as it's an AJAX response with a text/json content type, it's fine. Then on the client, when you decide to insert it into the actual textarea, you need to make sure you insert it as text and not as HTML. Technically this could mean using jQuery's .text() instead of .html(), or your template engine's or client-side data-binding solution's relevant method (text: instead of html: in Knockout, #: instead of #= in, say, Kendo UI, etc.).
The advantage of this latter approach is that when sending the data, the server (something like an API) does not need to know or care about where or how a client will use the data; it's just data. The client may need different encoding for an HTML or a JavaScript context, and the server cannot necessarily choose the right one.
If you know it's just that text area though where this data is needed, you can of course take the first (your original) approach, encode it on the server, that's equally good (some may argue that's even better in that scenario).
The problem with answering this question is that details count a lot. In theory, there are myriad ways you could do it right, but sometimes a good solution differs from a vulnerable one by a single character.
So this is the solution I went for. I added the [AllowHtml] attribute to the ViewModel, so that I could send raw HTML from the textarea (through AJAX).
With this attribute I avoid the System.Web.HttpRequestValidationException that MVC throws to protect against XSS dangers.
Then I traverse the JSON string by parsing it as a JToken and encoding the strings:
using System.Collections.Generic;
using System.Web.Security.AntiXss;
using Newtonsoft.Json.Linq;

public class JsonUtils
{
    public static string HtmlEncodeJTokenStrings(string jsonString)
    {
        var reconstruct = JToken.Parse(jsonString);

        // Walk the whole token tree without recursion.
        var stack = new Stack<JToken>();
        stack.Push(reconstruct);
        while (stack.Count > 0)
        {
            var item = stack.Pop();
            if (item.Type == JTokenType.String)
            {
                var valueItem = item as JValue;
                if (valueItem == null)
                    continue;

                // Encode only the string values; the JSON structure stays intact.
                var value = valueItem.Value<string>();
                valueItem.Value = AntiXssEncoder.HtmlEncode(value, true);
            }
            foreach (var child in item.Children())
            {
                stack.Push(child);
            }
        }
        return reconstruct.ToString();
    }
}
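For example, round-tripping the dangerous payload from the question:

var json = "{ \"key1\": \"<script>alert(\\\"danger!\\\")</script>\" }";
var encoded = JsonUtils.HtmlEncodeJTokenStrings(json);
// The structure is untouched; only the string value is encoded:
// { "key1": "&lt;script&gt;alert(&quot;danger!&quot;)&lt;/script&gt;" }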
The resulting JSON string will still be valid, and I store it in the DB. Now, when printing it in a view, I can use the strings directly from the JSON in JS.
When opening it again in another <textarea> for editing, I have to decode the HTML entities. For that I "stole" some JS code (decodeHtmlEntities) from string.js, of course adding the licence and credit note.
Hope this helps anyone.
I have my application, and from the security testing team I got a bug report about the possibility for a user to inject malicious code through our forms' inputs.
The application is developed in ASP.NET MVC4, .NET 4.5 and EF 5.
The attack being tested is the usual HTML being entered, but instead of using the regular < or >, my coworker is using ＜ and ＞ (the fullwidth Unicode versions of the previous characters; see here for the full list). MVC lets these characters through, and then, somehow, the ORM removes the "wide" portion of the character and lets the standard, plain characters into the DB. Needless to say, if not correctly encoded in the output of a view, retrieving and rendering these characters can lead to XSS vulnerabilities.
What I need now is a way to sanitize and perform a Normalize() of all the strings being submitted in any form in the application.
Some people told me to create a custom model binder, but in the BindModel method I couldn't find a spot to modify the fields so that the framework would later see the cleansed values and detect the injection.
Any suggestion will be much appreciated.
You can provide your custom request validation instead.
using System.Text;
using System.Web;
using System.Web.Util;

public class NormalizingRequestValidator : RequestValidator
{
    protected override bool IsValidRequestString(HttpContext context, string value, RequestValidationSource requestValidationSource, string collectionKey, out int validationFailureIndex)
    {
        // Normalize to Form KC first, so fullwidth characters fold to their
        // ASCII equivalents before the standard validation runs.
        return base.IsValidRequestString(context, value.Normalize(NormalizationForm.FormKC), requestValidationSource, collectionKey, out validationFailureIndex);
    }
}
In web.config
<system.web>
<httpRuntime targetFramework="4.5" requestValidationType="YourNamespace.NormalizingRequestValidator, YourAssembly" />
</system.web>
If you also want to normalize the strings you receive in your controllers, implement a custom ValueProviderFactory. See What’s the Difference Between a Value Provider and Model Binder?
Note: if you choose to only implement a ValueProviderFactory, you will have to call RequestValidator.Current to manually validate the normalized string.
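A rough sketch of such a factory, assuming you only need form fields (the class name is mine; note it reads the unvalidated collection, hence the caveat above):

using System.Collections.Specialized;
using System.Globalization;
using System.Text;
using System.Web.Mvc;

// Hypothetical sketch: feed model binding with normalized form values.
// Because this reads Request.Unvalidated.Form, request validation is
// bypassed for these values; call RequestValidator.Current yourself.
public class NormalizingFormValueProviderFactory : ValueProviderFactory
{
    public override IValueProvider GetValueProvider(ControllerContext controllerContext)
    {
        var form = controllerContext.HttpContext.Request.Unvalidated.Form;
        var normalized = new NameValueCollection();
        foreach (string key in form)
        {
            var value = form[key];
            normalized[key] = value == null ? null : value.Normalize(NormalizationForm.FormKC);
        }
        return new NameValueCollectionValueProvider(normalized, CultureInfo.CurrentCulture);
    }
}

// Registered once at startup, e.g. in Global.asax:
// ValueProviderFactories.Factories.Add(new NormalizingFormValueProviderFactory());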
I'm concerned about the predictability of my application in handling string input in different cultures. It has been a problem in older software and I don't want it to be a problem in the new.
I generally have two sources of input: strings entered into a WPF application, and streams loaded from files containing text. These cultured strings are generally entered into a model before being used:
public struct MyModel
{
public String Name;
}
I want to design a meaningful test to ensure some logic can actually handle Result DoSomething(MyModel model); when the model contains text entered on a different machine.
But how can I show a case where the difference matters?
For example, the following fails.
var inNativeCulture= "[Something12345678.9:1] {YeS/nO}";
var inChineseCulture = inNativeCulture.ToString(new CultureInfo("zh-CN"));
Assert.That(inChineseCulture, Is.Not.EqualTo(inNativeCulture));
[Question]
How can I test DoSomething such that the test is able to fail if the strings are not converted to the InvariantCulture?
Should I even bother? I.e., will the string "Something" entered on a French keyboard always equal "Something" entered on a Chinese keyboard?
What can I test for that will mitigate Globalization problems?
The ToString method on a string that takes an IFormatProvider is essentially a no-op. The documentation states: "Returns this instance of String; no actual conversion is performed."
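You can see this directly; the call even returns the very same string instance, which is why the assertion in the question fails:

using System.Globalization;
using NUnit.Framework;

// String.ToString(IFormatProvider) performs no conversion at all:
var inNativeCulture = "[Something12345678.9:1] {YeS/nO}";
var inChineseCulture = inNativeCulture.ToString(new CultureInfo("zh-CN"));
Assert.That(inChineseCulture, Is.SameAs(inNativeCulture)); // passes: same instance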
Since you are concerned about avoiding issues, here's some general advice. First, it is very helpful to have a clear distinction in your mind between frontend (user-facing) strings and backend (database, wire, file, etc.) strings. Frontend strings should be generated/accepted according to the user's culture / application language. These strings should not be persisted (with few exceptions, like when you are generating a document that will be read only by people and not by machine). Backend strings should always use standard formats that will not change over time. If you accept the fact that the data used to generate/parse globalized strings changes, then you will isolate yourself from the effects by ensuring that you do not persist user-facing strings.
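As a small illustration of that split, assuming a numeric value that is shown to a German user but persisted to a file or database:

using System;
using System.Globalization;

class CultureSplitDemo
{
    static void Main()
    {
        double value = 1234.5;

        // Frontend: format for the user's culture (German uses ',' as decimal separator).
        string forUser = value.ToString(new CultureInfo("de-DE")); // "1234,5"

        // Backend: persist with the invariant culture, stable across machines and time.
        string forStorage = value.ToString(CultureInfo.InvariantCulture); // "1234.5"

        // Parse back with the same convention it was written with.
        double roundTrip = double.Parse(forStorage, CultureInfo.InvariantCulture);

        Console.WriteLine("{0} / {1} / {2}", forUser, forStorage, roundTrip);
    }
}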
Say I have a textBox and a property to get and set its value:
public string SomeText
{
    get { return HttpUtility.HtmlEncode(textBox.Text); }
    set { textBox.Text = HttpUtility.HtmlEncode(value); }
}
I have used HtmlEncode to prevent JavaScript injection attacks. After thinking about it, though, I'm thinking I only need the HtmlEncode in the getter. The setter is only used by the system and cannot be accessed by an external user.
Is this correct?
A couple of points:
First:
You should really only encode values when you display them, and not at any other time. By encoding them as you get the value from the box, and also when you paste it in, you can end up with a real mess that gets worse every time someone edits the values. You should not encode values (against HTML/JavaScript injection; you DO need to protect against SQL injection, of course) when saving to the database in most cases, especially if the value could later be edited. In that case you would actually need to decode it when loading it back, not encode it again. But again: it's much simpler to encode only when displaying (which includes displaying for editing, by the way).
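A quick illustration of the mess double encoding makes; each encode-on-get plus encode-on-set round trip adds another layer:

using System;
using System.Web;

class DoubleEncodeDemo
{
    static void Main()
    {
        string input = "<b>hello</b>";

        // First encoding is what you want at display time:
        string once = HttpUtility.HtmlEncode(input);  // "&lt;b&gt;hello&lt;/b&gt;"

        // Encoding again (e.g. on every property get AND set) corrupts it:
        string twice = HttpUtility.HtmlEncode(once);  // "&amp;lt;b&amp;gt;hello&amp;lt;/b&amp;gt;"

        Console.WriteLine(twice); // every edit cycle makes it worse
    }
}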
Second:
HtmlEncode protects against injecting HTML, which can include a <script> block that would run JavaScript, true. But it also protects against generally malicious HTML that has nothing to do with JavaScript. Protecting against JavaScript injection is almost a different thing, though; that is, if you might ever display something entered by the user in, say, a javascript alert('%VARIABLE'); call, you have to do a totally different kind of encoding there than what you are doing.
Yes. You only need to encode strings that you have accepted from users and that you have to show inside your pages.