HMAC Licensing Example Does Not Make Sense - C#

I am researching licensing solutions for a project of mine. One article has the following text:
"The expiration date is represented as days (not seconds) since 1/1/1970. This way it only takes two bytes to represent the date" - http://www.drdobbs.com/licensing-using-symmetric-and-asymmetric/184401687?pgno=1 (under the heading "HMAC Licensing System", about halfway down)
How can this be correct? The day count comes back as a 32-bit integer, so how can the author fit that information into 2 bytes?

You can simply truncate the 32-bit integer to 16 bits. An unsigned 16-bit integer has a maximum of 65535 which, expressed as a number of days, is over 179 years.
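To make the trick concrete, here is a small sketch of the two-byte encoding the article describes; the dates and names are purely illustrative:

using System;

// Encode an expiration date as days since 1/1/1970, truncated to 16 bits.
// 65535 days carries the scheme to around the year 2149.
class ExpiryDemo
{
    static readonly DateTime Epoch = new DateTime(1970, 1, 1);

    static void Main()
    {
        DateTime expiry = new DateTime(2030, 6, 15);
        ushort days = (ushort)(expiry - Epoch).TotalDays;  // fits easily in 16 bits
        byte[] twoBytes = BitConverter.GetBytes(days);     // exactly 2 bytes on the wire

        DateTime decoded = Epoch.AddDays(BitConverter.ToUInt16(twoBytes, 0));
        Console.WriteLine(decoded);  // 15/06/2030 00:00:00
    }
}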

Related

Why does C# System.Decimal (decimal) "waste" bits?

As written in the official docs, the 128 bits of System.Decimal are filled like this:
The return value is a four-element array of 32-bit signed integers.
The first, second, and third elements of the returned array contain
the low, middle, and high 32 bits of the 96-bit integer number.
The fourth element of the returned array contains the scale factor and
sign. It consists of the following parts:
Bits 0 to 15, the lower word, are unused and must be zero.
Bits 16 to 23 must contain an exponent between 0 and 28, which
indicates the power of 10 to divide the integer number.
Bits 24 to 30 are unused and must be zero.
Bit 31 contains the sign: 0 means positive, and 1 means negative.
With that in mind one can see that some bits are "wasted" or unused.
Why not, for example, 120 bits of integer, 7 bits of exponent, and 1 bit of sign?
Probably there is a good reason for a decimal being the way it is. This question would like to know the reasoning behind that decision.
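For reference, the layout quoted above can be inspected directly with decimal.GetBits; a small sketch (the chosen value is arbitrary):

using System;

// decimal.GetBits returns the four 32-bit elements: lo, mid, hi, flags.
class GetBitsDemo
{
    static void Main()
    {
        int[] parts = decimal.GetBits(-1.5m);
        // -1.5m is the 96-bit integer 15 with scale 1 and the sign bit set.
        Console.WriteLine($"lo={parts[0]} mid={parts[1]} hi={parts[2]}");  // lo=15 mid=0 hi=0
        Console.WriteLine($"flags=0x{parts[3]:X8}");                      // flags=0x80010000
    }
}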
Based on Kevin Gosse's comment:
For what it's worth, the decimal type seems to predate .net. The .net
framework CLR delegates the computations to the oleaut32 lib, and I
could find traces of the DECIMAL type as far back as Windows 95
I searched further and found a likely user of the DECIMAL code in Windows 95's oleaut32.
The old Visual Basic (non-.NET) and VBA have a sort-of-dynamic type called 'Variant'. In there (and only in there) you could save something nearly identical to our current System.Decimal.
Variant is always 128 bits, with the first 16 bits reserved for an enum value indicating which data type is inside the Variant.
The separation of the remaining 112 bits could be based on common CPU architectures in the early '90s, or on ease of use for the Windows programmer. It sounds sensible not to pack exponent and sign into one byte just to have one more byte available for the integer.
When .NET was built, the existing (low-level) code for this type and its operations was reused for System.Decimal.
None of this is 100% verified, and I would have liked the answer to contain more historical evidence, but that's what I could puzzle together.
Here is the C# source of Decimal. Note the FCallAddSub style methods; these call out to (unavailable) fast C++ implementations.
I suspect the implementation is like this because it means that operations on the 'numbers' in the first 96 bits can be simple and fast, since CPUs operate on 32-bit words. If 120 bits were used, CPU operations would be slower and trickier, requiring a lot of bitmasks to get at the interesting extra 24 bits, which would then be difficult to work with. Additionally, that would 'pollute' the highest 32 bits of flags and make certain optimizations impossible.
If you look at the code, you can see that this simple bit layout is useful everywhere. It is no doubt especially useful in the underlying C++ (and probably assembler).
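To make that concrete, here is a small sketch showing how cheaply the current layout yields scale and sign, and how the same fields go back in through decimal's public constructor (the chosen values are arbitrary):

using System;

// Scale and sign sit on byte/word boundaries, so extracting them
// is just a shift and a mask.
class DecimalLayoutDemo
{
    static void Main()
    {
        int flags = decimal.GetBits(123.456m)[3];
        int scale = (flags >> 16) & 0xFF;  // bits 16 to 23
        bool negative = flags < 0;         // bit 31
        Console.WriteLine($"scale={scale}, negative={negative}");  // scale=3, negative=False

        // The same fields go back in through the public constructor:
        decimal rebuilt = new decimal(123456, 0, 0, false, 3);
        Console.WriteLine(rebuilt);  // 123.456
    }
}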

Read Fortran binary file into C# without knowledge of Fortran source code?

Part one of my question is whether this is even possible. I will briefly describe my situation first.
My work has a licence for software that performs a very specific task, but most of our time is spent exporting data from the results into Excel etc. to perform further analysis. I was wondering if it was possible to dump all of the data into a C# object so that I can then write my own analysis code, which would save us a lot of time.
The software we license was written in Fortran, but we have no access to the source code. The file looks like it is written out in binary, but I do not know whether it is unformatted, sequential, etc. (is there any way to discern this?).
I have used some of the other answers on this site to successfully read the data into a byte[], but this is as far as I have got. I have tried converting portions to doubles (which I assume most of the data is), but the numbers do not strike me as meaningful (most appear too large or too small).
I have the documentation for the software and I can see that most of the internal variable names are 8-character strings; would these be saved with the data? If not, I think it would be almost impossible to match all the data to its corresponding variables. I imagine most of the data will be double arrays of the same length (the number of time points), though there will also be some longer arrays where data was interpolated because shorter time steps were needed for convergence.
Any tips or hints would be appreciated, or even someone telling me it's just not possible, so I don't waste any more time trying to solve this.
Thank you.
If it were formatted, you would be able to read it with a text editor: the numbers would be written in plain text.
So yes, it's probably unformatted.
There are still different possibilities: the file can have a fixed record length, or it might have a variable one.
But it seems to me that the first 4 bytes of each record represent an integer containing the length of that record in bytes. For example, here I've written the numbers 1 to 10, and then 11 to 30, into an unformatted file, and the file looks like this:
40 1 2 3 4 5 6 7 8 9 10 40
80 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 80
(I added the line break.) Here, the first 4 bytes represent the number 40, followed by ten 4-byte blocks representing the numbers 1 to 10, followed by another 40. The next record starts with 80, then twenty 4-byte blocks containing the numbers 11 through 30, followed by another 80.
So that is a pattern you could look for: read the first 4 bytes and convert them to an integer, then read that many bytes and convert them to whatever you think they should be (4-byte float, 8-byte float, et cetera), and then check whether the next 4 bytes again represent the number you read first.
But there are other ways to write data in Fortran that don't show this behaviour, for example direct access and stream. So no guarantees.
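For what it's worth, here is a minimal C# sketch of that probe, assuming 4-byte little-endian record markers; the file name and the 8-byte-float guess are placeholders:

using System;
using System.IO;

// Read a leading 4-byte record length, the payload, and the trailing
// marker; if the two markers match, the file is probably sequential
// unformatted with default record markers.
class FortranRecordProbe
{
    static void Main()
    {
        using var reader = new BinaryReader(File.OpenRead("results.dat"));  // placeholder name
        while (reader.BaseStream.Position < reader.BaseStream.Length)
        {
            int length = reader.ReadInt32();            // leading marker
            byte[] payload = reader.ReadBytes(length);  // record contents
            int trailing = reader.ReadInt32();          // trailing marker

            if (trailing != length)
                throw new InvalidDataException("Markers disagree - not sequential unformatted?");

            // Try one interpretation, e.g. 8-byte floats:
            if (length % 8 == 0)
                for (int i = 0; i < length; i += 8)
                    Console.Write($"{BitConverter.ToDouble(payload, i)} ");
            Console.WriteLine();
        }
    }
}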

Why do these two hex numbers come up at the same integer on online calcs?

This is really annoying. I have two hex numbers, and I am 90% sure that one of them is exactly 2 increments higher. However, when I type them into an online hex-to-decimal calculator they come out the same. How can this be?
lower number:
0x00010471000001BF001F = 18766781122258862000
higher number:
0x00010471000001BF0021 = 18766781122258862000
What is going on?
The calc I used is...
http://www.rapidtables.com/convert/number/hex-to-decimal.htm
The higher number is indeed 2 higher, not 1: 0x00010471000001BF0020 is in between. I think your problem is an overflow issue, because the numbers are very large. The calculator you are using probably converts the values to floating point, which loses accuracy.
The values you are posting need at least 9 bytes (at least 65 bits) to represent.
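To check that theory, a quick C# snippet (using BigInteger just to get the 65-bit values into memory) shows that a double really does collapse the two values into one:

using System;
using System.Globalization;
using System.Numerics;

// At this magnitude, consecutive doubles are 2048 apart (53-bit mantissa),
// so two values only 2 apart round to the same double.
class DoubleRoundingDemo
{
    static void Main()
    {
        var lo = BigInteger.Parse("00010471000001BF001F", NumberStyles.HexNumber);
        var hi = BigInteger.Parse("00010471000001BF0021", NumberStyles.HexNumber);

        Console.WriteLine((double)lo == (double)hi);  // True - a double can't tell them apart
        Console.WriteLine((double)lo);                // ~1.8766781122258862E+19 for both
    }
}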
First, basic knowledge of hex should tell you that 0x20 is between 0x1F and 0x21, so the higher number is the lower number + 2.
Second, if you use an unknown tool, you have to be sure it's reliable. Your tool obviously can't handle such large numbers.
Wolfram Alpha gives you the correct answers:
http://www.wolframalpha.com/input/?i=0x00010471000001BF001F+in+decimal
http://www.wolframalpha.com/input/?i=0x00010471000001BF0021+in+decimal
First things first, why did you classify this question under the C# tag?
The problem is most likely caused by the values being too big; the converter doesn't handle large numbers well.
Just because this is tagged with C#:
Add a reference to the .NET assembly System.Numerics.
To convert a large hex string to an integer, use BigInteger:
System.Numerics.BigInteger a;

// BigInteger has arbitrary precision, so nothing overflows here; the
// leading zeros keep HexNumber from treating the top bit as a sign bit.
System.Numerics.BigInteger.TryParse("00010471000001BF001F",
    System.Globalization.NumberStyles.HexNumber, null, out a);
Console.WriteLine(a.ToString());
System.Numerics.BigInteger.TryParse("00010471000001BF0021",
    System.Globalization.NumberStyles.HexNumber, null, out a);
Console.WriteLine(a.ToString());
Output:
18766781122258862111
18766781122258862113

Most efficient way to store a 40-card deck

I'm building a simulator for a 40-card deck game. The deck is divided into 4 suits, each one with 10 cards. Since only 1 suit is different from the others (let's say, hearts), I've thought of a quite convenient way to store a set of 4 cards with the same value in 3 bits: the first two indicate how many of the non-heart cards of a given value are left, and the last one is a marker that tells whether the heart card of that value is still in the deck.
So,
{7h 7c 7s} = 101
That allows me to store the whole deck in 30 bits of memory instead of 40. Now, when I was programming in C, I'd have allocated 3 chars (1 byte each = 32 bits) and played with the values with bit operations.
In C# I can't do that, since chars are 2 bytes each and playing with bits is much more of a pain, so the question is: what's the smallest amount of memory I'll have to use to store the data required?
PS: Keep in mind that I may have to allocate 100k+ of those decks in system memory, so saving 10 bits is quite a lot.
in C, I'd have allocated 3 chars ( 1 byte each = 32 bits)
3 bytes gives you 24 bits, not 32... you need 4 bytes to get 32 bits. (Okay, some platforms have non-8-bit bytes, but they're pretty rare these days.)
In C# I can't do that, since chars are 2 bytes each
Yes, so you use byte instead of char. You shouldn't be using char for non-textual information.
and playing with bits is much more of a pain
In what way?
But if you need to store 30 bits, just use an int or a uint. Or, better, create your own custom value type that backs the data with an int but exposes appropriate properties and constructors to make it nicer to work with (a sketch follows this answer).
PS: Keep in mind that i may have to allocate 100k+ of those decks in system's memory, so saving 10 bits is quite a lot
Is it a significant amount though? If it turned out you needed to store 8 bytes per deck instead of 4, that would mean 800K instead of 400K for 100,000 of them. Still well under a megabyte of memory. That's not that much...
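To illustrate the custom value type idea, here is a minimal sketch assuming the 3-bit-per-rank encoding from the question (two bits for the non-heart count, one bit for the heart marker); all names are illustrative:

// Ten ranks, three bits each, packed into a single uint (30 of 32 bits used).
readonly struct Deck
{
    private readonly uint bits;

    private Deck(uint bits) => this.bits = bits;

    // Every rank starts as 0b111: 3 non-heart cards plus the heart.
    public static Deck Full => new Deck(0x3FFFFFFF);

    public int NonHeartsLeft(int rank) => (int)(bits >> (rank * 3)) & 0b11;
    public bool HeartLeft(int rank) => ((bits >> (rank * 3 + 2)) & 1) != 0;

    public Deck RemoveHeart(int rank) => new Deck(bits & ~(1u << (rank * 3 + 2)));

    // Assumes the counter is nonzero; decrements the 2-bit field.
    public Deck RemoveNonHeart(int rank) => new Deck(bits - (1u << (rank * 3)));
}

Usage would look like Deck d = Deck.Full.RemoveHeart(6); and the struct stays at 4 bytes per deck, so 100,000 of them cost about 400K.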
In C#, unlike in C/C++, the concept of a byte is not overloaded with the concept of a character.
Check out the byte datatype, in particular byte[], which many of the APIs in the .NET Framework have special support for.
C# (and modern versions of C) have a type that's exactly 8 bits: byte (or uint8_t in C), so you should use that. A C char is usually 8 bits, but that's not guaranteed, so you shouldn't rely on it.
In C#, you should use char and string only when dealing with actual characters and strings of characters, don't treat them as numbers.

Store a 64-bit integer in a Jet engine (Access) database?

What would be the best / most efficient / least memory-consuming way to store a 64-bit integer in a Jet Engine database? I'm pretty sure its integers are 32 bits.
The largest integer MS Access supports is the NUMBER type (FieldSize = LONG INTEGER), but this is only 32 bits, not 64.
http://msdn.microsoft.com/en-us/library/ms714540(v=vs.85).aspx
To store numbers as large as 64 bits you will need to use the DOUBLE or DECIMAL type, but you will not have integer precision with DOUBLE, and you have overhead with DECIMAL.
Alternatively, you could use a CURRENCY type and disregard the decimals.
http://www.w3schools.com/sql/sql_datatypes.asp
For more details on the nuances of all data types you can look here:
http://office.microsoft.com/en-us/access-help/introduction-to-data-types-and-field-properties-HA010233292.aspx
EDIT: though you will have a limited number of significant digits with DOUBLE, as pointed out by @ho1 in the comments below.
You can make CURRENCY work by inferring the digits in code if you are pressed for disk storage space, but your best bet is probably DECIMAL.
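For what it's worth, a small sketch of the CURRENCY trick: Jet's CURRENCY is stored as a 64-bit integer with four implied decimal places, so scaling by 10,000 maps the full Int64 range onto it exactly. The helper names are illustrative, and this assumes the column surfaces as a decimal in .NET:

using System;

// Pack an Int64 into the decimal a CURRENCY column stores, and unpack it
// on the way back out. The scaling is exact: a 19-digit Int64 fits well
// inside decimal's 28-digit precision.
static class CurrencyPacking
{
    public static decimal Pack(long value) => value / 10000m;
    public static long Unpack(decimal stored) => (long)(stored * 10000m);
}

class Demo
{
    static void Main()
    {
        long original = long.MaxValue;
        decimal asCurrency = CurrencyPacking.Pack(original);  // 922337203685477.5807
        Console.WriteLine(CurrencyPacking.Unpack(asCurrency) == original);  // True
    }
}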
