I'm using Fortify static code analyzer with a C#/.NET project. I'm taking an integer parameter, a year, from user input and starting a process with that:
int y = int.Parse(Year.SelectedValue); // Year is a DropDownList
if (y >= 2017 && y <= DateTime.Today.Year)
    Process.Start(new ProcessStartInfo(Server.MapPath("~/bin/SomeProgram.exe"), "/x:" + y.ToString()));
Fortify doesn't like that, throws a "Command Injection" issue:
Data enters the application from an untrusted source.
In this case the data enters at get_SelectedValue() in ccc.aspx.cs at
line 25. Even though the data in this case is a number, it is
unvalidated and thus still considered malicious, hence the
vulnerability is still reported but with reduced priority values.
The data is used as or as part of a string representing a command that is executed by the application.
In this case the command is executed by ProcessStartInfo() in
ccc.aspx.cs at line 28.
There are literally two possible values of input that would cause the process to start (as of this writing) - 2017 and 2018. If the if() statement doesn't count as validation for Fortify, what would?
EDIT: on top of everything, unless you explicitly opt out of ASP.NET's ViewState integrity check, DropDownList doesn't allow values outside of the assigned range. With this in mind, I don't see why the SelectedValue of a DropDownList is treated as an untrusted source in the first place.
Mark it as a false positive and move on.
I don't think Fortify takes the data type into account. You are converting the value from a string to an int, validating it, and then using the int value, not the original string. So as far as command injection goes, it's not an issue (in this case).
--
What constitutes a validation?
When it comes to Fortify, there is a difference between what constitutes validation and what will make Fortify stop reporting on it.
Unfortunately, there are some cases (as far as I have found in my 5+ years of using Fortify) where you just cannot make it happy without writing a custom rule telling the analyzer that some method cleanses the data.
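One pattern that sometimes helps with taint analyzers, and is worth trying before resorting to a custom rule, is to re-derive the argument from literals the code owns instead of passing anything built from the input. This is only a hedged sketch against the code in the question (same Year and Server page members); whether Fortify accepts it still depends on its rule set.

// Hedged sketch: map the parsed input onto literals the code owns, so the
// argument handed to the process never flows from SelectedValue itself.
string arg = null;
switch (int.Parse(Year.SelectedValue))
{
    case 2017: arg = "/x:2017"; break;
    case 2018: arg = "/x:2018"; break;
    // add new cases as further years become valid
}
if (arg != null)
{
    Process.Start(new ProcessStartInfo(Server.MapPath("~/bin/SomeProgram.exe"), arg));
}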
I'm building an application with a react.js front-end (although I'm fairly sure that's not relevant to the issue) and an asp.net back-end (targeting net5.0, not sure if that's relevant). When I call the back-end API, I get back an object that includes, in part, an ID it generated based on the data passed in, which is of type long in C# (a 64-bit int). The behavior I'm seeing is that the variable in C# and the variable read from the response on the front end are different. They appear to drift after about 16-17 digits.
Is this expected? Is there any way to get around it?
Code to reproduce / Pictures of what I see:
C#
[HttpPost]
[Route("test")]
public object TestPassingLongInObject()
{
    /* actual logic omitted for brevity */
    var rv = new
    {
        DataReadIn = new { /* omitted */ },
        ValidationResult = new ValidationResult(),
        GeneratedID = long.Parse($"922337203{new Random().Next(int.MaxValue):0000000000}") // close to long.MaxValue
    };
    Console.WriteLine($"ID in C# controller method: {rv.GeneratedID}");
    return rv;
}
Console output: ID in C# controller method: 9223372030653055062
Chrome Dev Tools:
When I try to access the ID on the front-end I get the incorrect ID ending in 000 instead of 062.
Edit 1: It's been suggested that this is because JavaScript's Number.MAX_SAFE_INTEGER is less than the value I'm passing. I don't believe that is the reason, but perhaps I'm wrong and someone can enlighten me. On the JS side, I'm using BigInt, precisely because the number I'm passing is too large for Number. The issue appears before I parse the result to a JS object, though (unless Chrome is doing that automatically, leading to the picture referenced in the issue).
Edit 2: Based on the answers below, it looks like perhaps JS is parsing the value to a number before I'm parsing it to a BigInt, so I'm still losing precision. Can someone more familiar with the web confirm this?
JavaScript is interpreting that value as a Number, which is a 64-bit floating-point value. If you want to keep the value exact, you're better off passing it back as a string.
Example JS to demonstrate the problem:
var value = 9223372030653055062;
console.log(value);
console.log(typeof(value));
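On the C# side, the "pass it back as a string" fix is a one-line change to the anonymous object from the question; a minimal sketch, reusing the question's code, trimmed:

// Serialize the ID as text so the JSON value is never read as a lossy double.
var id = long.Parse($"922337203{new Random().Next(int.MaxValue):0000000000}");
var rv = new
{
    DataReadIn = new { /* omitted */ },
    ValidationResult = new ValidationResult(),
    GeneratedID = id.ToString() // the full 19-digit value now survives the round trip intact
};
return rv;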
JavaScript engines cannot represent integers this big exactly; the value gets silently rounded. Try opening your console and entering the following:
var myLong = 9223372030653055062;
console.log(myLong);
You can check 9223372030653055 > Number.MAX_SAFE_INTEGER as well, as confirmation that you're going out of bounds.
The maximum safe integer for a Number in JavaScript is 2^53 - 1, which is 9007199254740991. In C# a long is a signed 64-bit integer with a maximum of 2^63 - 1, which is far bigger. So "9223372030653055062" works as a C# long but not as a JavaScript Number, because it is too big. If this ID is being stored in the database as a long, I'd suggest you just pass it to JavaScript as a string.
Although the reason is that the number is bigger than JS can accurately represent, you seem to have been distracted by Chrome's Dev Tools preview. This is only a "helpful" preview and not necessarily the truth.
The dev tools preview shows a helpful view of the response. I presume the AJAX response is transferred with content type of application/json, so Chrome helpfully parses the JSON to a JS object and lets you preview the response. Check the Response tab, this is what will actually be received by your AJAX code.
The chances are that the JSON is being parsed before you have a chance to use BigInt on that number, capping the precision. You haven't shown us your AJAX code, so this is my best guess.
The only solution would be to serialize it as a string then add a manual step to convert the string to BigInt.
jsObject.generatedId = BigInt(jsObject.generatedId);
If you're using it as an ID then you might as well keep it as a string on the client-side as you're not likely to be using it to do any calculations.
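If you'd rather not change every property by hand, ASP.NET Core's default System.Text.Json serializer supports custom converters. A hedged sketch, assuming you're on System.Text.Json rather than Newtonsoft.Json (the converter name is made up):

using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// Writes every long as a JSON string so the browser can feed it straight to BigInt.
public class LongAsStringConverter : JsonConverter<long>
{
    public override long Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
        => long.Parse(reader.GetString());

    public override void Write(Utf8JsonWriter writer, long value, JsonSerializerOptions options)
        => writer.WriteStringValue(value.ToString());
}

Registered once in Startup with services.AddControllers().AddJsonOptions(o => o.JsonSerializerOptions.Converters.Add(new LongAsStringConverter())), every long in the response then goes out as a string, and the client-side BigInt(...) conversion above works on the raw text.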
Suppose that I have to create an object for which the user can set many values, for example 10 values. This object also has other values that are set by the system according to the values given by the user.
Is it a good idea to create tests for all the possible combinations of the values that the user can set? Because in this case, if one value can have 5 possible values, another 3, another 6... the number of combinations can be huge, even for a small method.
Another case is a property that is set by the system and doesn't depend on the values that the user can set, for example the modification date. For example:
class MyClass
{
    // property types are assumed here for illustration
    public int Property01 { get; set; }
    public int Property02 { get; set; }
    public int Property03 { get; set; }
    public DateTime ModificationDate { get; set; }
}

void method01(int param01, int param02, MyClass paramObject)
{
    paramObject.ModificationDate = DateTime.Now;
    if (param01 < 0 && param02 > 0)
    {
        paramObject.ModificationDate = new DateTime(2000, 01, 01);
    }
}
In this case, if I don't test all the possible values of the ints, or at least the cases where param01 is lower than 0 and param02 is bigger than 0, I couldn't find the error; suppose I only tested a case where param01 is bigger than 0, then I wouldn't be able to detect it.
But this case is easy. What would happen if I had more parameters, or more possible values for the parameters? The number of combinations would be really big.
In general, I would like to know some common practices for testing objects that can take many values, because at the beginning I considered trying all possible combinations of all possible values of the parameters, but in some cases that is so much work that it makes me wonder whether there is a better approach.
Thanks.
The number of combinations may become huge, but most likely only if you design your test cases with a black box approach.
In practice, code that has to distinguish between different parameter constellations typically follows a divide-and-conquer approach: Early in the code some fundamental distinctions are made.
For example, functions often check first whether the input arguments are in a valid range. The cases where there are invalid arguments are then handled separately (sometimes causing exceptions, or leading to an early exit of the function).
When doing glass box testing you can benefit from knowledge of this approach: you don't have to create a cartesian product of all combinations of valid and invalid inputs. To test whether invalid inputs are detected and handled correctly, you create test cases that focus on one input at a time: to test whether invalid data for input i1 is detected, you give all other inputs valid data and only i1 gets invalid data, and similarly for the other inputs. Thus, you can ignore combinations like i1 invalid and i2 invalid and i3 invalid.
Such glass box knowledge can help reduce the number of test cases tremendously. It comes at a price, however: when you change your implementation, you will also have to adjust your test cases. Moreover, with glass box testing you may overlook some scenarios of relevance. But, all in all, it's a tradeoff that lets you end up with an amount of test cases that can be handled.
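As a hedged illustration of that idea against the method01 example above (assuming xUnit, and a hypothetical Example class that hosts method01), the tests target each branch and boundary rather than the full cartesian product:

using System;
using Xunit;

public class Method01Tests
{
    [Theory]
    // one case per branch/boundary, not every combination of param01 and param02
    [InlineData(-1, 1, true)]   // both conditions met: the special date is used
    [InlineData(0, 1, false)]   // param01 boundary: condition not met
    [InlineData(-1, 0, false)]  // param02 boundary: condition not met
    [InlineData(5, 7, false)]   // ordinary input
    public void ModificationDate_follows_the_branch(int param01, int param02, bool expectSpecialDate)
    {
        var obj = new MyClass();

        new Example().method01(param01, param02, obj);

        if (expectSpecialDate)
            Assert.Equal(new DateTime(2000, 01, 01), obj.ModificationDate);
        else
            Assert.NotEqual(new DateTime(2000, 01, 01), obj.ModificationDate);
    }
}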
I'm not authorized to show the code but I have a problem:
When using the recording feature of CUIT in VS 2015, the test yields an error partway through the playback.
A date entry field is a masked string input field like this: "MM/DD/YYYY HH:MM". You can type the values freely into the field. The issue is that during playback, CUIT attempts to enter the string value captured as the control's final state, "05/09/2017 12:42". The "/" and ":" in that value cause the cursor to tab through the masked input, resulting in an erroneous entry. The actual string required to account for all of the tabbing is literally "05///09///2017 12::42", but when I use that hard-coded value, it errors out while attempting to check for the longer version; it states that it can't set the control to that value.
Is there a way to tell the CUIT to evaluate an overridden value so that it doesn't try to enter the string stored within the control which contains "/" and ":"?
You need to modify the value in the ...ExpectedValues class that holds the recorded date-time. Coded UI sends the recorded characters (or, more accurately, the values from the ...ExpectedValues class) to the application, and the application you are testing adds the / and : characters in the appropriate places. The Coded UI recorder records both the entered and the generated characters.
Change the recorded 05/09/2017 12:42 value to be 05092017 1242. This can be done via the UI Map editor if the same date-time is always needed. Commonly the date-times are provided via the data source of a data driven test, or they are generated by the test itself. In either case it should be easy to provide data without the / and : or to add code to remove them before they are used. The wanted values are then written, when the test runs, into the ...ExpectedValues class.
See here for some additional notes on the ...ExpectedValues class and on data driving tests.
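For the "add code to remove them" suggestion, a small hedged sketch (the field and map names below are made up; substitute whatever your generated UIMap and data source actually use):

// Hypothetical helper: strip the separators the masked control inserts by itself,
// so Coded UI only types the digits and the space.
private static string ToMaskedInput(string recordedDateTime)
{
    return recordedDateTime.Replace("/", string.Empty).Replace(":", string.Empty);
}

// e.g. before playback (illustrative names):
// this.UIMap.EnterDateTimeParams.UIDateEditText = ToMaskedInput("05/09/2017 12:42"); // "05092017 1242"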
I am creating a calculator as a Windows Store application. I have successfully published the app to the store.
Now there is a problem in my app: after getting the result of an operation, whenever I press a numeric key, that value gets appended to the existing value.
In the following snapshot I have added two numbers (1, 1):
Now I am entering another value to perform some other operation, but the new value gets appended to the existing value. I am entering 1 here:
How can I clear the existing value when a numeric key is pressed after a result?
You could declare a bool value that starts out false and switch it to true when your calculation is done. Then you write a method that checks whether the calculation is done, and if it is, you simply clear the output (I guess you use a TextBlock / TextBox?). That would be my way in this situation; maybe there is a better solution for you. I hope it helps you get a clearer way in mind.
As the author of the Windows Calculator that shipped from Windows 3.0 through Windows Vista, I agree with user3645029's response. You need to work out the input model for the app, so you understand clearly when you begin entering a new number and when you append to the one showing. I suspect that your app logic isn't making this distinction.
Let me be more specific:
If the key pressed is a number and the last key pressed was a number, then you add that new digit, which effectively means multiplying the current value by 10 and then adding the new key.
If the key pressed is a number and the last key pressed was an operator, =, or similar keys, then you're starting a new number input and your current value should be reset to 0 first.
In short, writing a calculator app requires an internal state machine that understands how to proceed from one input to the next. From what you describe, it sounds like you're missing the logic for the = key. Generally speaking, hand-held calculators with an = sign effectively clear the current value if you start entering a new number after =. Only if you press an operator does that value persist, and in that case you're also starting a new current value and keeping the "2" in your case as the first operand.
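A minimal hedged sketch of that input model (names are illustrative; wire it to whatever TextBlock/TextBox and key handlers your app already has):

// displayText stands in for the TextBlock/TextBox content.
private string displayText = "0";
private bool startNewNumber = true; // true after "=" or an operator key

private void OnDigitPressed(string digit)
{
    if (startNewNumber)
    {
        displayText = digit;   // replace the previous result
        startNewNumber = false;
    }
    else
    {
        displayText += digit;  // append, i.e. current value * 10 + new digit
    }
}

private void OnEqualsOrOperatorPressed()
{
    // ... perform the pending calculation and show the result ...
    startNewNumber = true;     // the next digit begins a fresh number
}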
So, we're using SubSonic as our DAL/ORM for one of our projects. Everything has been running smoothly (the database already existed, so we used SubSonic on top of it); however, on occasion we'll run into an exception that says something like the integer is exceeding the max length. In this example, our MySQL field is a signed int(4), which, according to the MySQL documentation, allows a range of:
-2147483648 to 2147483647.
Now, my question, how is MaxLength dictated in SubSonic? Is it the number of digits? Because that means it would only allow -9999 to 9999, correct? That seems like a rather huge discrepancy, and I'm hoping that isn't the case, or else we're going to have a ton of other problems.
Thanks,
-Steve
Using Reflector, and drilling down to ActiveRecord's Save function (which calls ValidateColumnSettings):
if ((!flag && (column.MaxLength > 0)) && ((column.DataType != DbType.Boolean) && (currentValue.ToString().Length > column.MaxLength)))
{
    Utility.WriteTrace(string.Format("Max Length Exceeded {0} (can't exceed {1}); current value is set to {2}", column.ColumnName, column.MaxLength, currentValue.ToString().Length));
    this.errorList.Add(string.Format(this.LengthExceptionMessage, str, column.MaxLength));
}
flag is set to true if the variable is null. So, yes, it's going off the number of digits (see ToString().Length). This doesn't seem to make any sense, since MySQL's display width for integer types (the 4 in int(4)) doesn't limit the range of values that can be stored.
This is SubSonic 2.2.
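To see the mismatch concretely, here's a small hedged repro of the check above in plain C# (no SubSonic types), assuming SubSonic maps the column's display width of 4 to MaxLength, as the question suggests:

// The value fits comfortably in a MySQL signed int(4) column...
int currentValue = 123456;
int maxLength = 4; // what SubSonic would treat as the column's MaxLength

// ...but the check compares digit count against MaxLength, so it trips anyway.
bool exceedsMaxLength = currentValue.ToString().Length > maxLength; // 6 > 4 -> true
Console.WriteLine(exceedsMaxLength); // True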