Store a 64-bit integer in a Jet engine (Access) database? - c#

What would be the best / most efficient / least memory-consuming way to store a 64-bit integer in a Jet Engine database? I'm pretty sure its integers are 32 bits.

The largest integer MS Access supports is the NUMBER type (Field Size = Long Integer), but this is 32 bits, not 64.
http://msdn.microsoft.com/en-us/library/ms714540(v=vs.85).aspx
To store numbers as large as 64 bits you will need to use the DOUBLE or DECIMAL type, but you will not have "integer precision" with DOUBLE, and you have some overhead with DECIMAL.
Alternatively you could use a CURRENCY type and disregard decimals.
http://www.w3schools.com/sql/sql_datatypes.asp
For more details on the nuances of all data types you can look here:
http://office.microsoft.com/en-us/access-help/introduction-to-data-types-and-field-properties-HA010233292.aspx
EDIT: Though, as pointed out by @ho1 in the comments below, you will have a limited number of significant digits with DOUBLE.
You can make CURRENCY work by inferring the digits in code if you are pressed for disk storage space, but your best bet is probably DECIMAL.
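As a minimal sketch of the DECIMAL approach: assuming a DECIMAL(19, 0) column (19 digits is enough for any 64-bit value), the long can be passed as a C# decimal so no precision is lost. The table name Readings, column BigValue, and file demo.mdb are placeholders, and this assumes the Jet 4.0 OLE DB provider is available:

```
using System;
using System.Data.OleDb;

class StoreInt64Demo
{
    static void Main()
    {
        long bigValue = long.MaxValue; // 9,223,372,036,854,775,807 - 19 digits

        string connStr = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=demo.mdb";
        using (var conn = new OleDbConnection(connStr))
        {
            conn.Open();
            using (var cmd = new OleDbCommand(
                "INSERT INTO Readings (BigValue) VALUES (?)", conn))
            {
                // Widen the long to decimal; every long fits in a decimal exactly.
                cmd.Parameters.AddWithValue("?", (decimal)bigValue);
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```

Reading it back is the reverse: fetch the field as decimal and cast to long, checking the range first if the column could hold values outside ±2^63.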

Related

I am reading floating-point values from a PLC's 32-bit memory area as if they were Integer values. How can I convert this Integer to Single in C#?

I am reading values from an S7-300 PLC with my C# code. When the values are in INT format there is no problem. But there are some 32-bit memory areas (double words) that are encoded in the IEEE 754 floating-point standard (the first bit is the sign bit, the next 8 bits the exponent, and the remaining 23 bits the mantissa).
I can read these memory areas from the PLC only as Int32 (as if they were integers).
How can I convert this integer-read value to a Single value in C#, respecting the IEEE 754 floating-point encoding in the double word?
It worked just as wanted with Eldar's answer. If you read a 32-bit float value as an integer, just convert it like this:
var finalSingle = BitConverter.ToSingle(BitConverter.GetBytes(s7Int), 0);
Thanks again to Eldar :-)
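For reference, a self-contained round trip of that one-liner (here s7Int is simulated locally rather than read from a PLC):

```
using System;

class PlcFloatDemo
{
    static void Main()
    {
        float original = 123.45f;

        // Simulate the PLC side: the 32-bit float pattern read back as an Int32.
        int s7Int = BitConverter.ToInt32(BitConverter.GetBytes(original), 0);

        // Reinterpret the same 32 bits as an IEEE 754 single.
        float finalSingle = BitConverter.ToSingle(BitConverter.GetBytes(s7Int), 0);

        Console.WriteLine(finalSingle); // 123.45
    }
}
```

One caveat: S7 PLCs store data big-endian, so depending on how your driver delivers the double word you may need to reverse the byte order (e.g. with Array.Reverse on the byte array) before calling BitConverter.ToSingle.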

C# Converting Double to Float Loses So Much Precision Why?

While debugging my program in Visual Studio, when I convert the double variable "91.151497095188446" to a float using
(float)double_variable
I see the resulting float variable is "91.1515".
Why am I losing so much precision? Do I need to use another data type?
C# and many other languages use IEEE 754 as a specification for their floating-point data types. Floating-point numbers are expressed as a significand and an exponent, similar to how a decimal number, in scientific notation, is expressed as
1.234567890 x 10^12
where 1.234567890 is the significand (mantissa) and 12 is the exponent.
I won't go into the details (the Wikipedia article goes into that better than I can), but IEEE 754 specifies that:
a 32-bit floating-point number, such as the C# float data type, has 24 bits of precision for the significand and 8 bits for the exponent;
a 64-bit floating-point number, such as the C# double data type, has 53 bits of precision for the significand and 11 bits for the exponent.
Because a float only has 24 bits of precision, it can only express 7-8 decimal digits. A double, with 53 bits of precision, gives about 15-16 decimal digits.
As has been said in the comments, if you don't want to lose precision, don't go from a double (64 bits in total) to a float (32 bits in total). Depending on your application, you could perhaps use the decimal data type which has 28-29 decimal digits of precision - but will come with penalties because (a) calculations involving it are slower than for double or float, and (b) it's typically far less supported by external libraries.
Note that you're talking about 91.151497095188446, which will actually be rounded to the nearest double by the compiler and printed back as 91.1514970951884 - see, for example, this:
double value = 91.151497095188446;
Console.WriteLine(value);
// prints 91.1514970951884
You will find a detailed explanation here: https://stackoverflow.com/a/2386882/10863059
Basically, float takes smaller memory space (4 bytes) when compared to double (8 bytes), but that's just one part of the story.
Floating-point types store fractional parts as inverse powers of two. For this reason, they can represent exactly only values such as 10, 10.25, 10.5, and so on. Other numbers, such as 1.23 or 19.99, cannot be represented exactly and are only approximations.
Even if double has 15 decimal digits of precision, as compared to only 7 for float,
precision loss starts to accumulate when performing repeated calculations.
This makes double and float difficult or even inappropriate to use in certain types of
applications, such as financial applications, where precision is key. For this purpose, the
decimal type is provided.
Reference: Learn C# Programming: A Guide to Building a Solid Foundation in C# Language for Writing Efficient Programs, by Marius Bancila, Raffaele Rialdi, and Ankit Sharma.
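The quoted point about precision loss accumulating over repeated calculations can be seen directly. A small sketch summing 0.1 ten thousand times in a float versus a decimal (the exact float result can vary slightly, so it is hedged here):

```
using System;

class AccumulationDemo
{
    static void Main()
    {
        float f = 0f;
        decimal d = 0m;
        for (int i = 0; i < 10_000; i++)
        {
            f += 0.1f;   // 0.1 has no exact binary representation
            d += 0.1m;   // 0.1 is exact in base 10
        }
        Console.WriteLine(f); // noticeably off from 1000 (around 999.9 in practice)
        Console.WriteLine(d); // 1000.0 exactly
    }
}
```

Each iteration adds a tiny representation error in the float case, and after ten thousand additions the drift is visible in the fourth significant digit, which is exactly why the book steers financial code toward decimal.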

Can casting a long value to double potentially cause a value to be lost [duplicate]

I have read in different posts on Stack Overflow and in the C# documentation that converting long (or any other data type representing a number) to double loses precision. This is quite obvious due to the representation of floating-point numbers.
My question is: how big is the loss of precision if I convert a large number to double? Do I have to expect differences larger than ±X?
The reason I would like to know this is that I have to deal with a continuous counter which is a long. This value is read by my application as a string, needs to be cast, has to be divided by e.g. 10 or some other small number, and is then processed further.
Would decimal be more appropriate for this task?
converting long (or any other data type representing a number) to double loses precision. This is quite obvious due to the representation of floating point numbers.
This is less obvious than it seems, because precision loss depends on the value of the long. For values between -2^52 and 2^52 there is no precision loss at all.
How big is the loss of precision if I convert a larger number to double? Do I have to expect differences larger than +/- X
For numbers with magnitude above 2^52 you will experience some precision loss, depending on how far above the 52-bit limit you go. If the absolute value of your long fits in, say, 58 bits, then the magnitude of your precision loss will be 58-52 = 6 bits, or +/-64.
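A quick sketch of that boundary: a value within the 52-bit range round-trips through double exactly, while a 59-bit value gets rounded to a multiple of 64 (the spacing between adjacent doubles at that magnitude):

```
using System;

class PrecisionLossDemo
{
    static void Main()
    {
        long small = 1L << 52;      // within the exact range
        long big = (1L << 58) + 1;  // needs 59 bits

        Console.WriteLine((long)(double)small == small); // True
        Console.WriteLine((long)(double)big == big);     // False: the +1 is lost,
                                                         // rounded to a multiple of 64
    }
}
```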
Would decimal be more appropriate for this task?
decimal has a different representation than double, and it uses a different base. Since you are planning to divide your number by "small numbers", different representations would give you different errors on division. Specifically, double will be better at handling division by powers of two (2, 4, 8, 16, etc.) because such division can be accomplished by subtracting from exponent, without touching the mantissa. Similarly, large decimals would suffer no loss of significant digits when divided by ten, hundred, etc.
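This base-dependence of division error can be demonstrated directly. A sketch: dividing a double by a power of two only adjusts the exponent, and dividing a decimal by ten only adjusts the scale, so each of those round-trips exactly; a divisor like 3, which is exact in neither base, loses digits in both:

```
using System;

class DivisionBaseDemo
{
    static void Main()
    {
        // double: division by a power of two just decrements the exponent - exact.
        double d = 123456789.0;
        Console.WriteLine(d / 8 * 8 == d);   // True

        // decimal: division by ten just increments the scale - exact.
        decimal m = 123456789m;
        Console.WriteLine(m / 10 * 10 == m); // True

        // Division by 3 is exact in neither base.
        Console.WriteLine(1m / 3 * 3 == 1m); // False: 0.9999... after rounding
    }
}
```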
long
long is a 64-bit integer type and can hold values from –9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 (max. 19 digits).
double
double is a 64-bit floating-point type with a precision of 15 to 16 digits. So data can certainly be lost once your numbers grow beyond about 9 x 10^15 (2^53, the largest range within which every integer is exactly representable).
decimal
decimal is a 128-bit decimal type and can hold up to 28-29 digits. So it's always safe to cast long to decimal.
Recommendation
I would advise that you find out the exact expectations about the numbers you will be working with. Then you can make an informed decision in choosing the appropriate data type. Since you are reading your numbers from a string, isn't it possible that they will be even greater than 28 digits? In that case, none of the types listed will work for you, and instead you'll have to use some sort of BigInt implementation.

C#: Decimal -> Largest integer that can be stored exactly

I've been looking at the decimal type for some possible programming fun in the near future, and would like to use it as a bigger integer than an Int64. One key point is that I need to find out the largest integer that I can store safely in a decimal (without losing precision); I say this because apparently it uses some floating point in there, and as programmers, we all know the love that only floating point can give us.
So, does anyone know the largest int I can shove in there?
And on a separate note, does anyone have any experience playing with arrays of longs / ints? It's for the project following this one.
Thanks,
-R
decimal works differently from float and double - it always has enough information to store full integer precision, as the maximum exponent is 28 and it has 28-29 digits of precision. You may be interested in my articles on decimal floating point and binary floating point in .NET to look at the differences more closely.
So you can store any integer in the range [decimal.MinValue, decimal.MaxValue] without losing any precision.
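To make that boundary concrete, a quick sketch printing it - decimal stores a 96-bit integer plus a scale, so the largest exactly-representable integer is 2^96 - 1:

```
using System;

class DecimalMaxDemo
{
    static void Main()
    {
        // 2^96 - 1: every integer from decimal.MinValue to decimal.MaxValue
        // is stored exactly.
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335
    }
}
```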
If you want a wider range than that, you should use BigInteger as Fredrik mentioned (assuming you're on .NET 4, of course... I believe there are 3rd party versions available for earlier versions of .NET).
If you want to deal with big integers, you might want to look into the BigInteger type.
From the documentation:
Represents an arbitrarily large signed integer
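A minimal usage sketch (requires a reference to System.Numerics.dll on .NET 4+); the 39-digit value below is arbitrary, chosen to be beyond both long (19 digits) and decimal (28-29 digits):

```
using System;
using System.Numerics;

class BigIntegerDemo
{
    static void Main()
    {
        BigInteger huge = BigInteger.Parse(
            "123456789012345678901234567890123456789");

        // Arithmetic is exact at any size - no rounding, ever.
        Console.WriteLine(huge * 2); // 246913578024691357802469135780246913578
    }
}
```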

Why is System.Math and for example MathNet.Numerics based on double?

All the methods in System.Math take double parameters and return double values. The constants are also of type double. I checked out MathNet.Numerics, and the same seems to be the case there.
Why is this? Especially for constants. Isn't decimal supposed to be more exact? Wouldn't that often be kind of useful when doing calculations?
This is a classic speed-versus-accuracy trade off.
However, keep in mind that for PI, for example, the most digits you will ever need is 41.
The largest number of digits of pi that you will ever need is 41. To compute the circumference of the universe with an error less than the diameter of a proton, you need 41 digits of pi †. It seems safe to conclude that 41 digits is sufficient accuracy in pi for any circle measurement problem you're likely to encounter. Thus, in the over one trillion digits of pi computed in 2002, all digits beyond the 41st have no practical value.
In addition, decimal and double have slightly different internal storage structures. Decimals are designed to store base-10 data, whereas doubles (and floats) are made to hold binary data. On a binary machine (like every computer in existence) a double will have fewer wasted bits when storing any number within its range.
Also consider:
System.Double: 8 bytes, approximately ±5.0e-324 to ±1.7e308, with 15 or 16 significant figures
System.Decimal: 16 bytes, approximately ±1.0e-28 to ±7.9e28, with 28 or 29 significant figures
As you can see, decimal has a smaller range, but a higher precision.
No - decimals are no more "exact" than doubles, or for that matter, any type. The concept of "exactness" (when speaking about numerical representations in a computer) is what is wrong. Any type is absolutely 100% exact at representing some numbers. Unsigned bytes are 100% exact at representing the whole numbers from 0 to 255, but they're no good for fractions, for negatives, or for integers outside the range.
Decimals are 100% exact at representing a certain set of base 10 values. doubles (since they store their value using binary IEEE exponential representation) are exact at representing a set of binary numbers.
Neither is any more exact than than the other in general, they are simply for different purposes.
To elaborate a bit further, since I seem not to have been clear enough for some readers...
If you take every number which is representable as a decimal, and mark every one of them on a number line, between every adjacent pair of them there is an additional infinity of real numbers which are not representable as a decimal. The exact same statement can be made about the numbers which can be represented as a double. If you marked every decimal on the number line in blue, and every double in red, except for the integers, there would be very few places where the same value was marked in both colors.
In general, for 99.99999 % of the marks, (please don't nitpick my percentage) the blue set (decimals) is a completely different set of numbers from the red set (the doubles).
This is because by our very definition the blue set is a base-10 mantissa/exponent representation, and a double is a base-2 mantissa/exponent representation. Any value represented as a base-2 mantissa and exponent (e.g. 1.00110101001 x 2^(-11101001101001)) means: take the mantissa value (1.00110101001) and multiply it by 2 raised to the power of the exponent (when the exponent is negative, this is equivalent to dividing by 2 to the power of the absolute value of the exponent). This means that where the exponent is negative (or where any portion of the mantissa is a fractional binary), the number cannot be represented as a decimal mantissa and exponent, and vice versa.
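The two-sets picture is easy to check in code. A sketch: 0.1 is a "blue" number (exact as a decimal, only approximated as a double), while 0.25 = 2^-2 lies in both sets:

```
using System;

class RepresentabilityDemo
{
    static void Main()
    {
        // 0.1 is exact in base 10 but not in base 2.
        Console.WriteLine(0.1m);                 // 0.1
        Console.WriteLine(0.1.ToString("G17"));  // 0.10000000000000001

        // 0.25 is a negative power of two: exact in both bases.
        Console.WriteLine(0.25.ToString("G17")); // 0.25
    }
}
```

The "G17" format asks for enough digits to uniquely identify the stored double, which is what exposes the hidden approximation.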
For any arbitrary real number, that falls randomly on the real number line, it will either be closer to one of the blue decimals, or to one of the red doubles.
Decimal is more precise but has less of a range. You would generally use Double for physics and mathematical calculations but you would use Decimal for financial and monetary calculations.
See the following articles on msdn for details.
Double
http://msdn.microsoft.com/en-us/library/678hzkk9.aspx
Decimal
http://msdn.microsoft.com/en-us/library/364x0z75.aspx
It seems like most of the responses here to "it does not do what I want" are "but it's faster" - well, so is ANSI C plus the GMP library, but nobody is advocating that, right?
If you particularly want to control accuracy, then there are other languages which have taken the time to implement exact precision, in a user controllable way:
http://www.doughellmann.com/PyMOTW/decimal/
If precision is really important to you, then you are probably better off using languages that mathematicians would use. If you do not like Fortran, then Python is a modern alternative.
Whatever language you are working in, remember the golden rule:
Avoid mixing types...
So convert a and b to the same type before you attempt a operator b.
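In C#, the compiler enforces the golden rule for decimal: there is no implicit conversion between decimal and double, so mixing them is a compile error and the conversion point is always explicit. A small sketch (the variable names are illustrative):

```
using System;

class MixingTypesDemo
{
    static void Main()
    {
        decimal price = 19.99m;
        double rate = 1.05;

        // "price * rate" does not compile: C# refuses to mix decimal and double.
        // Convert explicitly, once, into the type you want the result in:
        decimal total = price * (decimal)rate;

        Console.WriteLine(total); // 20.9895
    }
}
```

Doing the conversion once, at a deliberate point, keeps the rounding behaviour under your control instead of the compiler's.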
If I were to hazard a guess, I'd say those functions leverage low-level math functionality (perhaps in C) that does not use decimals internally, and so returning a decimal would require a cast from double to decimal anyway. Besides, the purpose of the decimal value type is to ensure accuracy; these functions do not and cannot return 100% accurate results without infinite precision (e.g., irrational numbers).
Neither decimal nor float nor double is good enough if you require something to be precise. Furthermore, decimal is so expensive and overused out there that it is becoming a regular joke.
If you work in fractions and require ultimate precision, use fractions. It's the same old rule: convert once, and only when necessary. Your rounding rules will also vary per app, domain and so on, though surely you can find an odd example or two where the built-in types are suitable. But again, if you want fractions and ultimate precision, the answer is not to use anything but fractions. Consider that you might want arbitrary precision as well.
The actual problem with the CLR in general is that it is awkward, and plain broken in places, to implement a library that deals with numerics in a generic fashion, largely due to bad primitive design and the shortcomings of the most popular compiler for the platform. It's almost the same as the Java fiasco.
double just turns out to be the best compromise covering most domains, and it works well, despite the fact that the MS JIT is still incapable of utilising CPU technology that is about 15 years old now.
Double is a built-in type. It is supported by the FPU/SSE core (formerly known as the "math coprocessor"), which is why it is blazingly fast, especially at multiplication and scientific functions.
Decimal is actually a complex structure, consisting of several integers.
