I'm a modest C# programmer at one of the world's top insurance companies. I recently recompiled a huge piece of software against .NET Framework 4.0.
This is a very large ASP.NET business application containing many thousands of objects; it uses in-memory serialization/deserialization with MemoryStream to clone the state of the application (insurance contracts), pass it on to other modules, and save it at the end.
It worked fine for years. Now sometimes, not systematically, BinaryFormatter.Serialize throws the exception:
Decimal byte array constructor requires an array of length four containing valid decimal bytes.
After a patient search I identified the responsible field; it is declared as follows:
public class myClass
{
    .......
    private decimal? uidStampa = null;

    public decimal? UIDStampa
    {
        get
        {
            if (uidStampa == null)
            {
                uidStampa = Convert.ToDecimal(strdoc.Substring(5));
            }
            return uidStampa;
        }
        set
        {
            uidStampa = value;
            decimal d = (decimal)uidStampa; // NO complaint
            d.SanityCheck("_idstampa set"); // my method to check the bits of the decimal
        }
    }
    .........
}
and it is initialized through the setter as
myClass.UIDStampa = id; // id is a plain decimal value
Sometimes, for reasons yet to be discovered, this id apparently has a valid value:
If I hover over it or inspect it with QuickWatch I see the value 20000499428,
while if I examine it more closely it is formed by
20000499428 = {00000168 00000000 00000004 A81F66E4}
So the fourth number, the scale one, is invalid, although it does not affect the decimal point of the value. Nothing complains until serialization; only the serializer complains.
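For reference, here is a minimal sketch of the kind of bit check a SanityCheck like mine can perform with decimal.GetBits (this particular implementation is illustrative, not the exact code we use):

using System;

static class DecimalChecks
{
    // A decimal's flags word must have zero in bits 0-15 and 24-30,
    // and a scale (bits 16-23) between 0 and 28.
    public static void SanityCheck(this decimal d, string context)
    {
        int[] bits = decimal.GetBits(d); // lo, mid, hi, flags
        int flags = bits[3];
        int scale = (flags >> 16) & 0xFF;
        bool valid = (flags & 0x7F00FFFF) == 0 && scale <= 28;
        if (!valid)
            Console.WriteLine("{0}: invalid decimal bits {1:X8} {2:X8} {3:X8} {4:X8}",
                context, bits[3], bits[2], bits[1], bits[0]);
    }
}

A flags word of 0x00000168, as in the dump above, fails this check because bits 0-15 are non-zero.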
If I declare the private variable as NOT NULLABLE but plain decimal as
private decimal uidStampa = 0M;
the error is swallowed: the fourth number remains invalid, but it looks like it is simply not taken into account.
The questions are:
Why does BinaryFormatter.Serialize behave differently when the private variable is declared nullable rather than plain decimal, while the public property is still nullable?
Does the serializer of .NET 3.5 behave differently?
To discover the root of the evil, i.e. the point where the invalid value is set, what tool could I use to inspect memory in C#?
I'll appreciate any suggestion.
Marco
Related
Our entire application has been using a long to store large number values.
Like so:
public class SomeClass
{
    public long CardNumber { get; set; }
}
This is stored as a bigint in Microsoft SQL Server.
Now in order to account for a value larger than a long's max value I'm changing the datatype to a string and nvarchar in SQL Server (open to a better solution).
We don't seem to be doing much arithmetic with the value already across the application.
But we have code like this:
var someObj = new SomeClass();
someObj.CardNumber = 1234;
So I don't want to have to manually change it to
var someObj = new SomeClass();
someObj.CardNumber = 1234.ToString();
across the application.
I was thinking of doing something like this:
public class SomeClass
{
    private long cardnum;

    public string CardNumber
    {
        get { return cardnum.ToString(); }
        set { cardnum = Convert.ToInt64(value); }
    }
}
but what if I then want to set the card number to a value larger than long.MaxValue? The conversion to long would break in the setter.
I'm a bit lost as to what I should do here.
If 28 digits are enough, you could use a C# decimal. A SQL decimal even has a precision of up to 38 digits, while a C# long covers only 18 full digits (ulong 19).
An assignment like someObj.CardNumber = 1234; will continue to work without changes, as the int constant is implicitly converted to decimal.
For values too large for the built-in integer types, use a decimal constant with the m suffix: someObj.CardNumber = 1234_5678_9012_3456_7890m;. You can also use digit separators (_); they are not limited to blocks of three.
Always encrypt sensitive data!
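For illustration, a minimal sketch of what the decimal-backed property could look like (SomeClass and CardNumber come from the question; the Demo class is assumed):

using System;

public class SomeClass
{
    // decimal covers 28-29 significant digits, comfortably more than long
    public decimal CardNumber { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        var someObj = new SomeClass();
        someObj.CardNumber = 1234;                      // existing int assignments keep compiling
        someObj.CardNumber = 1234_5678_9012_3456_7890m; // a decimal literal with the m suffix
        Console.WriteLine(someObj.CardNumber);
    }
}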
In SQL Server you could alter the BIGINT column to be of type NUMERIC. The maximum integer precision available is NUMERIC(38, 0). One way to hold a NUMERIC(38, 0) in memory in C# is the (native SQL Server type) SqlDecimal, which is implemented as a struct in System.Data.SqlTypes.
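A minimal sketch of holding a 38-digit value in a SqlDecimal (the demo class name is made up; the string literal stands in for a value read from the NUMERIC(38, 0) column):

using System;
using System.Data.SqlTypes;

class SqlDecimalDemo
{
    static void Main()
    {
        // 38 digits: too wide for System.Decimal (28-29 digits), fine for SqlDecimal
        SqlDecimal big = SqlDecimal.Parse("99999999999999999999999999999999999999");
        Console.WriteLine(big);           // 99999999999999999999999999999999999999
        Console.WriteLine(big.Precision); // 38
    }
}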
I use System.Text.Json to deserialize some data and then serialize it again. The problem is that, for example, the double value 99.6 is deserialized and then serialized as 99.599999999999994.
What can I do about it?
Here's a reproduction in console app.
using System;
using System.Text.Json;

namespace ConsolePG3
{
    class Program
    {
        static void Main(string[] args)
        {
            Person person = new Person { Value = 99.6 };
            var text = JsonSerializer.Serialize(person);
            Console.WriteLine(text);
            Console.ReadLine();
        }
    }

    class Person
    {
        public double Value { get; set; }
    }
}
The important thing to get your head around here is that the double with value 99.6 does not exist, and never existed. You imagined it. It was rounded the moment you compiled it. It is simply not possible to represent the exact value 99.6 in floating-point arithmetic, due to how floating-point works. The serializer has correctly serialized the actual value that exists.
If you want to represent discrete values in the way that humans tend to think of them, use decimal instead of floating-point (float, double). It (decimal) is also limited in terms of precision (and it is not CPU-optimized), but the way it approximates is much more comparable to how humans approximate, and it will readily store the exact value for most common scenarios.
Frankly, the moment you are thinking about "the exact value": floating point is not a good choice.
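A minimal sketch of the decimal variant, reusing the Person shape from the question with Value switched to decimal:

using System;
using System.Text.Json;

namespace ConsolePG3
{
    class Program
    {
        static void Main(string[] args)
        {
            Person person = new Person { Value = 99.6m }; // decimal literal
            Console.WriteLine(JsonSerializer.Serialize(person)); // {"Value":99.6}
            Console.ReadLine();
        }
    }

    class Person
    {
        public decimal Value { get; set; }
    }
}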
I have one C# DLL and one Visual Basic 6 DLL.
In C# there is a field x with data type Decimal.
In VB6 there is a field y with data type Currency.
What would be the best way to pass x to y and back?
Currently I convert the fields to Double, but I am not sure if there are rounding implications.
Update 1:
Based on the helpful advice this is what my code looks now:
public void FromVbToNet(long vb6curr)
{
    decimal value = vb6curr / 10000m; // divide by a decimal so the fractional part is kept
}
Problem is, when I try to call this from VB6 via interop, I get a compile error:
"Function or interface marked as restricted, or the function uses an Automation type not supported in Visual Basic"
So how do I declare vb6curr? String, Object, Dynamic?
Update 2:
In case anyone needs this for reference, after further reading I came up with the following solution:
[return: MarshalAs(UnmanagedType.Currency)]
public decimal GetDecimalFromNetDll()
{
    decimal value = ... // Read from database
    return value;
}

public void SetDecimalInNetDll([MarshalAs(UnmanagedType.Currency)] decimal value)
{
    // Save to database
}
I call these from my unmanaged code in VB6 with a Currency parameter and everything seems to work so far.
A VB6 Currency data type is stored as a 64-bit integer, implicitly scaled by 10,000. Armed with that knowledge it is straightforward to convert between that type and the .NET decimal.
On the VB6 side you pass the data as Currency. On the C# side you pass it as long. Then on the C# side you scale by 10,000 to convert between your decimal value and the long value.
For example when you have a VB6 Currency value held in a C# long you convert to decimal like this:
long vb6curr = ...;
decimal value = vb6curr / 10000m; // divide by a decimal so the fractional part survives
In the other direction it would be:
decimal value = ...;
long vb6curr = Convert.ToInt64(value*10000);
After some reading I came up with this solution (see also under Update 2).
I had to marshal the Decimal type in .Net to the Currency type in the unmanaged VB6 code and vice versa.
[return: MarshalAs(UnmanagedType.Currency)]
public decimal GetDecimalFromNetDll()
{
    decimal value = ... // Read from database
    return value;
}

public void SetDecimalInNetDll([MarshalAs(UnmanagedType.Currency)] decimal value)
{
    // Save to database
}
For detailed information see:
http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.marshalasattribute%28v=vs.110%29.aspx
My application deals with percentages a lot. These are generally stored in the database in their written form rather than decimal form (50% would be stored as 50 rather than 0.5). There is also the requirement that percentages are formatted consistently throughout the application.
To this end i have been considering creating a struct called percentage that encapsulates this behaviour. I guess its signature would look something like this:
public struct Percentage
{
    public static Percentage FromWrittenValue();
    public static Percentage FromDecimalValue();

    public decimal WrittenValue { get; set; }
    public decimal DecimalValue { get; set; }
}
Is this a reasonable thing to do? It would certainly encapsulate some logic that is repeated many times, but it is straightforward logic that people are likely to understand. I guess I need to make this type behave like a normal number as much as possible; however, I am wary of creating implicit conversions to and from decimal in case these confuse people further.
Any suggestions on how to implement this class? Or compelling reasons not to?
I am actually a little bit flabbergasted at the cavalier attitude toward data quality here.
Unfortunately, the colloquial term "percentage" can mean one of two different things: a probability and a variance. The OP doesn't specify which, but since variance is usually calculated, I'm guessing he may mean percentage as a probability or fraction (such as a discount).
The extremely good reason for writing a Percentage class for this purpose has nothing to do with presentation, but with making sure that you prevent those silly silly users from doing things like entering invalid values like -5 and 250.
I'm thinking really more about a Probability class: a numeric type whose valid range is strictly [0,1]. You can encapsulate that rule in ONE place, rather than writing code like this in 37 places:
public double VeryImportantLibraryMethodNumber37(double consumerProvidedGarbage)
{
    if (consumerProvidedGarbage < 0 || consumerProvidedGarbage > 1)
        throw new ArgumentOutOfRangeException("Here we go again.");
    return someOtherNumber * consumerProvidedGarbage;
}
Instead you have this nice implementation. No, it's not a fantastically obvious improvement, but remember that with the raw double you're doing that value checking every single time you use the value.
public double VeryImportantLibraryMethodNumber37(Percentage guaranteedCleanData)
{
    return someOtherNumber * guaranteedCleanData.Value;
}
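A minimal sketch of such a Probability type (the name and details are illustrative):

using System;

public struct Probability
{
    public Probability(double value)
    {
        // the [0, 1] rule lives in exactly one place
        if (value < 0 || value > 1)
            throw new ArgumentOutOfRangeException("value", "A probability must be within [0, 1].");
        Value = value;
    }

    public double Value { get; }

    // let callers use it wherever a double is expected
    public static implicit operator double(Probability p)
    {
        return p.Value;
    }
}

The range check now lives in one constructor instead of being repeated in 37 library methods.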
The Percentage class should not be concerned with formatting itself for the UI. Rather, implement IFormatProvider and ICustomFormatter to handle the formatting logic.
As for conversion, I'd go with standard TypeConverter route, which would allow .NET to handle this class correctly, plus a separate PercentageParser utility class, which would delegate calls to TypeDescriptor to be more usable in external code. In addition, you can provide implicit or explicit conversion operator, if this is required.
And when it comes to Percentage, I don't see any compelling reason to wrap simple decimal into a separate struct other than for semantic expressiveness.
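A rough sketch of the TypeConverter route (the PercentageConverter name and the simplified Percentage struct here are assumptions, not an existing API):

using System;
using System.ComponentModel;
using System.Globalization;

[TypeConverter(typeof(PercentageConverter))]
public struct Percentage
{
    public Percentage(decimal value) : this() { Value = value; }
    public decimal Value { get; private set; }
}

public class PercentageConverter : TypeConverter
{
    public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
    {
        return sourceType == typeof(string) || base.CanConvertFrom(context, sourceType);
    }

    public override object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, object value)
    {
        string s = value as string;
        if (s != null)
            return new Percentage(decimal.Parse(s.TrimEnd('%', ' '), culture));
        return base.ConvertFrom(context, culture, value);
    }
}

With the attribute in place, TypeDescriptor.GetConverter(typeof(Percentage)).ConvertFromString("12.5%") resolves the converter automatically, which is what lets the rest of the framework handle the type.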
It seems like a reasonable thing to do, but I'd reconsider your interface to make it more like other CLR primitive types, e.g. something like:
// all error checking omitted here; you would want range checks etc.
public struct Percentage
{
    public Percentage(decimal value) : this()
    {
        this.Value = value;
    }

    public decimal Value { get; private set; }

    public static explicit operator Percentage(decimal d)
    {
        return new Percentage(d);
    }

    public static implicit operator decimal(Percentage p)
    {
        return p.Value;
    }

    public static Percentage Parse(string value)
    {
        return new Percentage(decimal.Parse(value));
    }

    public override string ToString()
    {
        return string.Format("{0}%", this.Value);
    }
}
You'd definitely also want to implement IComparable<T> and IEquatable<T>, as well as all the corresponding operators and overrides of Equals, GetHashCode, etc. You'd probably also want to consider implementing the IConvertible and IFormattable interfaces.
This is a lot of work. The struct is likely to be somewhere in the region of 1000 lines and take a couple of days to do (I know this because it's a similar task to a Money struct I wrote a few months back). If this is of cost-benefit to you, then go for it.
This question reminds me of the Money class that Patterns of Enterprise Application Architecture talks about - the link might give you food for thought.
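To give a flavour of what that involves, here is a partial sketch covering just the equality and comparison part (it assumes a Value property like the struct above; everything else is omitted):

using System;

public struct Percentage : IEquatable<Percentage>, IComparable<Percentage>
{
    public Percentage(decimal value) : this() { Value = value; }
    public decimal Value { get; private set; }

    public bool Equals(Percentage other) { return Value == other.Value; }
    public override bool Equals(object obj) { return obj is Percentage && Equals((Percentage)obj); }
    public override int GetHashCode() { return Value.GetHashCode(); }
    public int CompareTo(Percentage other) { return Value.CompareTo(other.Value); }

    public static bool operator ==(Percentage left, Percentage right) { return left.Equals(right); }
    public static bool operator !=(Percentage left, Percentage right) { return !left.Equals(right); }
    public static bool operator <(Percentage left, Percentage right) { return left.CompareTo(right) < 0; }
    public static bool operator >(Percentage left, Percentage right) { return left.CompareTo(right) > 0; }
}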
Even in 2022, with .NET 6, I found myself using something just like this. I concur with Michael's answer for the OP and would like to extend it for future Googlers.
Creating a value type is indispensable for expressing the domain's intent with enforced immutability. Notice especially that the Fraction record exposes a Quotient that would normally throw an exception on division by zero; here we can safely evaluate d / 0 with no error, and all derived records inherit that protection. It also offers an excellent place to establish simple routines for validity checks, data rehydration (as if DBAs never make mistakes), and serialization concerns, to name a few.
namespace StackOverflowing;

// Honor the simple fraction
public record class Fraction(decimal Dividend, decimal Divisor)
{
    public decimal Quotient => (Divisor > 0.0M) ? Dividend / Divisor : 0.0M;

    // Display dividend / divisor as the string, not the quotient
    public override string ToString()
    {
        return $"{Dividend} / {Divisor}";
    }
};

// Honor the decimal-based interpretation of the simple fraction
public record class DecimalFraction(decimal Dividend, decimal Divisor) : Fraction(Dividend, Divisor)
{
    // Change the display of this type to the decimal form
    public override string ToString()
    {
        return Quotient.ToString();
    }
};

// Honor the decimal fraction as the basis value but offer a converted value as a percentage
public record class Percent(decimal Value) : DecimalFraction(Value, 100.00M)
{
    // Display the quotient as it represents the simple fraction in base 10, aka radix 10
    public override string ToString()
    {
        return Quotient.ToString("p");
    }
};

// Example of a domain value object consumed by an entity or aggregate in finance
public record class PercentagePoint(Percent Left, Percent Right)
{
    public Percent Points => new(Left.Value - Right.Value);

    public override string ToString()
    {
        return $"{Points.Dividend} points";
    }
}
[TestMethod]
public void PercentScratchPad()
{
    var approximatedPiFraction = new Fraction(22, 7);
    var approximatedPiDecimal = new DecimalFraction(22, 7);
    var percent2 = new Percent(2);
    var percent212 = new Percent(212);
    var points = new PercentagePoint(new Percent(50), new Percent(40));

    TestContext.WriteLine($"Approximated Pi Fraction: {approximatedPiFraction}");
    TestContext.WriteLine($"Approximated Pi Decimal: {approximatedPiDecimal}");
    TestContext.WriteLine($"2 Percent: {percent2}");
    TestContext.WriteLine($"212 Percent: {percent212}");
    TestContext.WriteLine($"Percentage Points: {points}");
    TestContext.WriteLine($"Percentage Points as percentage: {points.Points}");
}
PercentScratchPad
Standard Output:
TestContext Messages:
Approximated Pi Fraction: 22 / 7
Approximated Pi Decimal: 3.1428571428571428571428571429
2 Percent: 2.00%
212 Percent: 212.00%
Percentage Points: 10 points
Percentage Points as percentage: 10.00%
I strongly recommend you just stick with the double type here (I don't see any use for the decimal type either, as you don't actually seem to require base-10 precision in the low decimal places). By creating a Percentage type here, you're really performing unnecessary encapsulation and just making it harder to work with the values in code. If you use a double, which is customary for storing percentages (among many other tasks), you'll find dealing with the BCL and other code a lot nicer in most cases.
The only extra functionality that I can see you need for percentages is the ability to convert to/from a percentage string easily. This can be done very simply anyway using single lines of code, or even extension methods if you want to abstract it slightly.
Converting to a percentage string:
public static string ToPercentageString(this double value)
{
    return value.ToString("#0.0%"); // e.g. 76.2%
}
Converting from a percentage string:
public static double FromPercentageString(this string value)
{
    return double.Parse(value.Substring(0, value.Length - 1)) / 100;
}
I think you may be mixing up presentation and logic here. I would convert the percentage to a decimal or float fraction (0.5) when getting it from the database and then let the presentation deal with the formatting.
I'd not create a separate class for that - it just creates more overhead. I think it will be faster just to use double variables set to the database value.
If it is common knowledge that the database stores percentages as 50 instead of 0.5, everybody will understand statements like part = (percentage / 100.0) * (double)value.
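For example, once the stored value is turned into a fraction, the standard percent format string does the presentation work (a minimal sketch; the demo class name is made up):

using System;
using System.Globalization;

class PercentDisplayDemo
{
    static void Main()
    {
        double stored = 50;               // written form from the database
        double fraction = stored / 100.0; // logic works with the fraction
        Console.WriteLine(fraction.ToString("P1", CultureInfo.InvariantCulture)); // 50.0 %
    }
}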
I'm working with a database that has the limit that the only (numeric) datatype it can store is a double. What I want to do is pick the number for a certain row and put it into an HTTP request. The problem revolves around that I cannot know if this number should or should not have decimals.
For example, if the double is an ID, I cannot have any kind of formatting whatsoever, since the site that gets the HTTP request will be confused. Observe the following examples:
site.com/showid.php?id=12300000 // OK
site.com/showid.php?id=1.23E7 // Bad; scientific notation
site.com/showid.php?id=12300000.0 // Bad; trailing decimal
The solution to this would be to cast it to a long. Ignoring the problem of overflowing the long, it solves the scientific notation and (obviously) the trailing decimal. This could be an acceptable solution, but it would be nice if the code didn't assume it were IDs we were dealing with. What if, for example, I were to query a site that shows a map and the numbers are coordinates, where the decimals are very important? Then a cast to long is no longer acceptable.
In short:
If the double has no decimals, do not add a trailing decimal.
If it has decimals, keep them all.
Neither case should have scientific notation or thousand separators.
This solution will be ported to both C# and Java so I accept answers in both languages.
(Oh, and I had no idea what to call this question, feel free to rename if you got something better.)
To complement the answer of gustafc (who beat me by 1 minute), here's the relevant code line for C#:
MyDouble.ToString("0.################")
or
string.Format("{0:0.################}", MyDouble);
Since it is safe to format the value with no trailing zeroes if it is integral (whether it represents an ID or a coordinate), why not just codify the logic you describe in your bullet points? For example (C#, but should translate readily to Java):
// Could also use Math.Floor, etc., to determine if it is integral
long integralPart = (long)doubleValue;
if ((double)integralPart == doubleValue)
{
    // has no decimals: format it as an integer, e.g. integralPart.ToString("D") in C#
}
else
{
    // has decimals: keep them all, e.g. doubleValue.ToString("F17")
}
How about encapsulating the number in a custom type?
public class IntelligentNumber
{
    private readonly double number;

    public IntelligentNumber(double number)
    {
        this.number = number;
    }

    public override string ToString()
    {
        long integralPart = (long)this.number;
        if ((double)integralPart == this.number)
        {
            return integralPart.ToString();
        }
        else
        {
            return this.number.ToString();
        }
    }
}
See also Vilx-'s answer for a better algorithm than the one above.
check whether num == round(num)
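A small sketch of that check in C# (the helper name is made up; Math.Floor would work just as well):

using System;
using System.Globalization;

static class IntegralCheck
{
    public static string Format(double num)
    {
        if (num == Math.Round(num))
            return ((long)num).ToString(CultureInfo.InvariantCulture);                // no decimals
        return num.ToString("0.#################", CultureInfo.InvariantCulture);     // keep them all
    }
}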
In Java, you can do this with DecimalFormat.
static String format(double n) {
    return new DecimalFormat("0.###########################").format(n);
}
The # placeholders won't show up unless the number has something other than zeros to put there, and the decimal point doesn't show up unless there's something following it.
Here's my own conclusion:
Check if the double has decimals.
Depending on that, format the string accordingly.
And then something important: without specifying the invariant culture, the decimal separator in the has-decimals case may be a "," instead of a ".", which isn't liked by HTTP requests. Of course, this problem only crops up if your OS is set to a locale that prefers the comma.
public static string DoubleToStringFormat(double dval)
{
    long lval = (long)dval;
    if ((double)lval == dval)
    {
        // has no decimals: format as integer
        return dval.ToString("#.", CultureInfo.InvariantCulture);
    }
    else
    {
        // has decimals: keep them all
        return dval.ToString("0.##################", CultureInfo.InvariantCulture);
    }
}