I have one C# DLL and one Visual Basic 6 DLL.
In C# there is a field x with data type Decimal.
In VB6 there is a field y with data type Currency.
What would be the best way to pass x to y and back?
Currently I convert the fields to Double, but I am not sure if there are rounding implications.
Update 1:
Based on the helpful advice, this is what my code looks like now:
public void FromVbToNet(long vb6curr)
{
    decimal value = vb6curr / 10000m;
}
Problem is, when I try to call this from VB6 via interop, I get a compile error:
"Function or interface marked as restricted, or the function uses an Automation type not supported in Visual Basic"
So how do I declare vb6curr? String, Object, Dynamic?
Update 2:
In case anyone needs this for reference, after further reading I came up with the following solution:
[return: MarshalAs(UnmanagedType.Currency)]
public decimal GetDecimalFromNetDll()
{
    decimal value = ... // Read from database
    return value;
}

public void SetDecimalInNetDll([MarshalAs(UnmanagedType.Currency)] decimal value)
{
    // Save to database
}
I call these from my unmanaged code in VB6 with a Currency parameter and everything seems to work so far.
A VB6 Currency data type is stored as a 64-bit integer, implicitly scaled by 10,000. Armed with that knowledge, it is straightforward to convert between that type and the .NET Decimal.
On the VB6 side you pass the data as Currency. On the C# side you pass it as long. Then on the C# side you scale by 10,000 to convert between your decimal value and the long value.
For example, when you have a VB6 Currency value held in a C# long, you convert to decimal like this:
long vb6curr = ...;
decimal value = vb6curr / 10000m;
In the other direction it would be:
decimal value = ...;
long vb6curr = Convert.ToInt64(value * 10000);
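For example, the two directions could be wrapped in a pair of helpers (a sketch; the names are made up, and anything beyond Currency's four decimal places is rounded away):
public static class CurrencyScaling
{
    // VB6 Currency holds exactly four decimal places in a scaled 64-bit integer.
    public static decimal FromCurrency(long vb6curr)
    {
        return vb6curr / 10000m; // decimal division keeps the fractional part
    }

    public static long ToCurrency(decimal value)
    {
        return Convert.ToInt64(decimal.Round(value, 4) * 10000m);
    }
}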
After some reading I came up with this solution (see also under Update 2).
I had to marshal the Decimal type in .Net to the Currency type in the unmanaged VB6 code and vice versa.
[return: MarshalAs(UnmanagedType.Currency)]
public decimal GetDecimalFromNetDll()
{
    decimal value = ... // Read from database
    return value;
}

public void SetDecimalInNetDll([MarshalAs(UnmanagedType.Currency)] decimal value)
{
    // Save to database
}
For detailed information see:
http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.marshalasattribute%28v=vs.110%29.aspx
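For context, here is how those two methods might sit in a COM-visible class so VB6 can call them; the class name and the ComVisible/ClassInterface settings are assumptions for this sketch, not part of the original code:
using System;
using System.Runtime.InteropServices;

[ComVisible(true)]
[ClassInterface(ClassInterfaceType.AutoDual)] // exposes the members to VB6 early binding
public class CurrencyBridge
{
    [return: MarshalAs(UnmanagedType.Currency)]
    public decimal GetDecimalFromNetDll()
    {
        return 1234.5678m; // placeholder for the database read in the original post
    }

    public void SetDecimalInNetDll([MarshalAs(UnmanagedType.Currency)] decimal value)
    {
        // placeholder for the database write in the original post
    }
}
After registering the assembly for COM interop (for example with regasm) and referencing it from the VB6 project, both members show up on the VB6 side with Currency parameters.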
Our entire application has been using a long to store large number values.
Like so:
public class SomeClass
{
    public long CardNumber { get; set; }
}
This is stored as a bigint in Microsoft SQL Server.
Now in order to account for a value larger than a long's max value I'm changing the datatype to a string and nvarchar in SQL Server (open to a better solution).
We don't seem to be doing much arithmetic with the value across the application as it is.
But we have code like this:
var someObj = new SomeClass();
someObj.CardNumber = 1234;
So I don't want to have to manually change it to
var someObj = new SomeClass();
someObj.CardNumber = 1234.ToString();
across the whole application.
I was thinking of doing something like this:
public class SomeClass
{
    private long cardnum;

    public string CardNumber
    {
        get { return cardnum.ToString(); }
        set { cardnum = Convert.ToInt64(value); }
    }
}
But what if I then want to set the card number to a value larger than long.MaxValue? The conversion to long would break in the setter.
I'm a bit lost as to what I should do here ...
If 28 digits are enough, you could use a C# decimal (28-29 significant digits). A SQL decimal goes even further, with a precision of up to 38 digits. A C# long can only hold every value up to 18 digits (its maximum is about 9.2 × 10^18); ulong every value up to 19 digits.
An assignment like someObj.CardNumber = 1234; will continue to work without changes, as the int constant is automatically converted to decimal.
For very large constants the decimal suffix m makes the intent explicit, and it becomes mandatory once the value no longer fits any integer literal type: someObj.CardNumber = 1234_5678_9012_3456_7890m;. You can also use digit separators (_); they are not limited to groups of three.
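A minimal sketch of what that could look like (SomeClass is from the question; the rest is purely illustrative):
public class SomeClass
{
    // decimal gives 28-29 significant digits, enough for numbers longer than long can hold.
    public decimal CardNumber { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        // Existing call sites keep compiling: the int literal converts to decimal implicitly.
        var someObj = new SomeClass();
        someObj.CardNumber = 1234;

        // Larger values, optionally with digit separators:
        someObj.CardNumber = 1234_5678_9012_3456_7890m;
    }
}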
Always encrypt sensitive data!
In SQL Server you could alter the BIGINT column to type NUMERIC; the maximum integer precision available is NUMERIC(38, 0). One way to hold a NUMERIC(38, 0) in memory in C# is the native SQL Server type SqlDecimal, which is implemented as a struct in System.Data.SqlTypes.
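If you go that route, reading the column back without losing digits might look roughly like this (a sketch; the column name and reader are made up):
using System.Data.SqlClient;
using System.Data.SqlTypes;

public static class CardNumberReader
{
    // Reads a NUMERIC(38, 0) column into a SqlDecimal, which keeps all 38 digits
    // (a System.Decimal would overflow above 28-29 significant digits).
    public static SqlDecimal ReadCardNumber(SqlDataReader reader)
    {
        int ordinal = reader.GetOrdinal("CardNumber");
        return reader.GetSqlDecimal(ordinal);
    }
}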
I am just passing a ValueTuple to a function. I want to process the Values within this ValueTuple.
Unfortunately, VS 2017 only lets me access credit.Item1, no further Items. So far I have had no issues with ValueTuples.
The error in the compiler is:
ValueTuple<(string loanID, decimal y, ...)> does not contain a definition for 'loanID'...
The code is
public void LogCredit(
    ValueTuple<(
        string loanID,
        decimal discount,
        decimal interestRate,
        decimal realReturn,
        decimal term,
        int alreadyPayedMonths)> credit)
{
    // not working!
    string loanID = credit.loanID;

    // this is the only thing I can do:
    string loanID = credit.Item1;

    // not working either!
    decimal realReturn = credit.Item2;
}
Meanwhile, when hovering over credit I can see the named fields correctly.
Any suggestions?
Your parameter has only a single field Item1 because it is not of type ValueTuple<...>, but of type ValueTuple<ValueTuple<...>>: the single type parameter of the outer ValueTuple is another ValueTuple, and this inner C# tuple is what contains your string, decimal and int fields.
Therefore, in your code you have to write string loanID = credit.Item1.loanID; to access those fields.
In order to access your fields as expected, remove the enclosing ValueTuple, leaving just the C# tuple behind:
public void LogCredit((string loanID, decimal discount, decimal interestRate,
                       decimal realReturn, decimal term, int alreadyPayedMonths) credit)
{
    string loanID = credit.loanID;
    decimal realReturn = credit.Item4;
}
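For illustration, a call site then just passes a tuple literal (the values here are made up):
// The tuple literal maps positionally onto the named elements declared in the parameter.
LogCredit(("L-1001", 0.02m, 0.045m, 1234.56m, 36m, 6));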
To get named fields on a ValueTuple, I prefer the C# 7 tuple syntax.
For the sake of completeness, here is a general blog article and an in-depth blog article about tuples in C# 7.
I'm a modest C# programmer at one of the world's top insurance companies. I recently recompiled a huge piece of software against .NET Framework 4.0.
It is a very large business ASP.NET application containing many thousands of objects; it uses in-memory serialization/deserialization with MemoryStream to clone the state of the application (insurance contracts), pass it on to other modules, and save it at the end.
It worked fine for years. Now, sometimes but not systematically, BinaryFormatter.Serialize throws the exception
Decimal byte array constructor requires an array of length four containing valid decimal bytes.
After a patient search I identified the field responsible; it is declared as follows:
public class myClass
{
    .......

    private decimal? uidStampa = null;

    public decimal? UIDStampa
    {
        get
        {
            if (uidStampa == null)
            {
                uidStampa = Convert.ToDecimal(strdoc.Substring(5));
            }
            return uidStampa;
        }
        set
        {
            uidStampa = value;
            decimal d = (decimal)uidStampa; // no complaint here
            d.SanityCheck("_idstampa set"); // my method to check the bits of the decimal
        }
    }

    .........
}
and it is initialized through the setter as
myClass.UIDStampa = id;
where id is a plain decimal number.
Sometimes, for reasons yet to be discovered, this id apparently has a valid value:
if I hover over it or look at it in QuickWatch I see the value 20000499428,
while if I examine it more closely it is formed by
20000499428 = {00000168 00000000 00000004 A81F66E4}
So the fourth number, the one holding the scale and sign flags, is invalid, although it does not affect the decimal point of the value. There are no complaints until serialization; only the serializer complains.
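For what it's worth, those four numbers can be inspected from C# with decimal.GetBits; a check along these lines (a sketch, not the actual SanityCheck) would flag a corrupted flags word:
using System;

static class DecimalBits
{
    // The fourth element returned by decimal.GetBits is the flags word:
    // bits 16-23 hold the scale (0-28), bit 31 the sign, everything else must be zero.
    public static bool HasValidBits(decimal d)
    {
        int flags = decimal.GetBits(d)[3];
        int scale = (flags >> 16) & 0xFF;
        bool reservedBitsClear = (flags & 0x7F00FFFF) == 0;
        return scale <= 28 && reservedBitsClear;
    }
}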
If I declare the private variable as NOT nullable, i.e. as a plain decimal:
private decimal uidStampa = 0M;
the error is swallowed: the fourth number remains invalid, but it looks like it is simply not taken into account.
The questions are:
Why does BinaryFormatter.Serialize behave differently depending on whether the private variable is declared nullable or not, while the public property stays nullable?
Does the serializer of .NET 3.5 behave differently?
To find the root of the evil, i.e. the point where the invalid value is set, what tool could I use to inspect memory from C#?
I'll appreciate any suggestions.
Marco
I have a function as under
private double RoundOff(object value)
{
    return Math.Round((double)value, 2);
}
And I am invoking it as under
decimal dec = 32.464762931906M;
var res = RoundOff(dec);
I am getting the error below
Specified cast is not valid
What is the mistake?
Thanks
Casting the object to double will attempt to unbox the object as a double, but the boxed object is a decimal. You need to unbox it as a decimal first and then convert it to a double. Then you perform the rounding:
Math.Round((double)(decimal)value, 2);
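Applied to the original method, that looks like this (note it only works while the box really contains a decimal):
private double RoundOff(object value)
{
    // Unbox as the actual boxed type (decimal), then convert to double for Math.Round.
    return Math.Round((double)(decimal)value, 2);
}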
The other answers are correct in terms of getting something that will run - but I wouldn't recommend using them.
You should almost never convert between decimal and double. If you want to use a decimal, you should use Math.Round(decimal). Don't convert a decimal to double and round that - there could easily be nasty situations where that loses information.
Pick the right representation and stick with it. Oh, and redesign RoundOff to not take object. By all means have one overload for double and one for decimal, but give them appropriate parameter types.
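A minimal sketch of that overload approach:
// Each overload keeps the value in its own representation; no decimal/double conversion.
private decimal RoundOff(decimal value)
{
    return Math.Round(value, 2);
}

private double RoundOff(double value)
{
    return Math.Round(value, 2);
}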
As an alternative to John's answer, if you want to support other number types than just decimal, you could use this code:
private double RoundOff(object value)
{
    return Math.Round(Convert.ToDouble(value), 2);
}
I'm having some problems converting a value from an OracleDecimal. Here is the code:
public static T GetReaderValue<T>(this OracleDataReader dr, string col) where T : struct
{
    int colNo = dr.GetOrdinal(col);
    var value = (double)OracleDecimal.SetPrecision(dr.GetOracleDecimal(colNo), 28);
    return (T)Convert.ChangeType(value, typeof(T), CultureInfo.InvariantCulture);
}
This works fine for most values but for some, like 0.12345, it returns numbers like 0.123499999999.
Can anyone offer some advice on how to convert OracleDecimal without these rounding errors?
Thanks!
System.Double is stored in base 2 rather than base 10. Some numbers that can be represented with a finite number of digits in base 10 (such as 0.12345) require an infinite number of digits in base 2, so a double can only hold an approximation of them.
As the database appears to be storing a decimal number you might be better off converting to a System.Decimal value instead so you don't lose precision (as System.Decimal uses base 10 in the same manner as OracleDecimal).
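A decimal-based variant of the extension method could look like this (a sketch; it assumes the same ODP.NET types as the question):
using System;
using System.Globalization;
using Oracle.DataAccess.Client;
using Oracle.DataAccess.Types;

public static class OracleReaderExtensions
{
    public static T GetReaderValue<T>(this OracleDataReader dr, string col) where T : struct
    {
        int colNo = dr.GetOrdinal(col);
        // OracleDecimal.Value yields a System.Decimal, so the value never passes through a base-2 double.
        decimal value = OracleDecimal.SetPrecision(dr.GetOracleDecimal(colNo), 28).Value;
        return (T)Convert.ChangeType(value, typeof(T), CultureInfo.InvariantCulture);
    }
}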