ASN.1 object identifier values greater than 128 - C#

So I understand the way the values are encoded when they are less than 128. However, after reading https://learn.microsoft.com/en-us/windows/desktop/seccertenroll/about-object-identifier, I still don't understand how values of 128 or greater are encoded. For example:
1.3.6.1.4.1.311.21.20
gets encoded into:
2b 06 01 04 01 82 37 15 14
How is 311 encoded into 82 37? If you simply read 82 37 as the hex number 0x8237, you get 33335 in decimal, so I don't really understand this part.

This article should help you understand the encoding.
Base-128 encoding is used: each sub-identifier is split into 7-bit groups, one group per byte, and the 8th bit (MSB) of every byte is a continuation flag: 1 means another byte follows, 0 marks the last byte of the sub-identifier.
82 37 is in binary 10000010 00110111. You can see that it is composed of 2 parts. The first byte has its MSB set to 1, but the second (also the last in this case) has its MSB set to 0, indicating the end of the sub-identifier. Strip the MSBs and join the 7-bit payloads: 0000010 0110111. The high group is 2, contributing 2 * 128 = 256; the low group 0110111 = 2^0 + 2^1 + 2^2 + 2^4 + 2^5 = 55; and 256 + 55 = 311.
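If you want to check this in C#, here is a minimal sketch of the decoding (DecodeSubIdentifiers is just an illustrative name, not a framework API); it accumulates 7 bits per byte until it reaches a byte whose MSB is clear:
using System;
using System.Collections.Generic;

class OidDemo
{
    // Decode the content bytes of a DER-encoded OID (everything after the
    // tag and length) into its sub-identifiers. Note that the first decoded
    // value packs the first two arcs as 40 * X + Y, so 0x2B means 1.3.
    static List<long> DecodeSubIdentifiers(byte[] content)
    {
        var arcs = new List<long>();
        long value = 0;
        foreach (byte b in content)
        {
            value = (value << 7) | (long)(b & 0x7F); // keep the low 7 bits
            if ((b & 0x80) == 0)                     // MSB clear: last byte of this arc
            {
                arcs.Add(value);
                value = 0;
            }
        }
        return arcs;
    }

    static void Main()
    {
        byte[] content = { 0x2B, 0x06, 0x01, 0x04, 0x01, 0x82, 0x37, 0x15, 0x14 };
        Console.WriteLine(string.Join(".", DecodeSubIdentifiers(content)));
        // Prints 43.6.1.4.1.311.21.20 (43 = 40 * 1 + 3, i.e. the "1.3" prefix)
    }
}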

Related

Creating a Bit Array in PowerShell/C# from integers

I'm trying to reverse engineer a game database and have come to a roadblock.
I can load all the tables/fields/records, but I'm stuck when it comes to converting the record values to hex or bits.
The values (in game) are as follows: (15 bits) 192 - (10 bits) 20 - (5 bits) 19 - (5 bits) 2
In the db file, it shows 00 C0 - 00 0A - A6 - 00
This is strange, because only the first value (00 C0) matches its plain hex form (192 = 0xC0).
The other values are different. I'm guessing this is because they are not full bytes (10 and 5 bits respectively), so it must be using a bit array.
This guess is further supported when I change the final value from 2 to 31: the last two values in the db change, and the hex string becomes 00 C0 - 00 0A - E6 - 07
So what's the best way to get these 4 integers into a bit array in PowerShell so I can try to work out what's going on here, in case it isn't obvious to more experienced programmers what's at play? If required I could also use C#, although I'm less experienced with it.
Thanks
I am not sure what you want to achieve; 5-bit words are literally odd.
It could be that there is no clear conversion here but something like a hash. Anyway, you could technically count from 0 to 2^35 - 1, poke each value into your game, and look up the result in your database.
Let me give you a few conversion methods:
To bit array:
$Bits =
[convert]::ToString(192, 2).PadLeft(15, '0') +
[convert]::ToString( 20, 2).PadLeft(10, '0') +
[convert]::ToString( 19, 2).PadLeft( 5, '0') +
[convert]::ToString( 2, 2).PadLeft( 5, '0')
$Bits
00000001100000000000101001001100010
And back:
if ($Bits -Match '(.{15})(.{10})(.{5})(.{5})') {
$Matches[1..4].Foreach{ [convert]::ToByte($_, 2) }
}
192
20
19
2
To Int64:
$Int64 = [convert]::ToInt64($Bits, 2)
$Int64
201347682
To bytes:
$Bytes = [BitConverter]::GetBytes($Int64)
[System.BitConverter]::ToString($Bytes)
62-52-00-0C-00-00-00-00
Note that the bytes list is reverse order:
[convert]::ToString(0x62, 2)
1100010
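If you end up doing this in C#, the packing can be sketched roughly like this (just an illustration of the 15/10/5/5 layout above, not the game's actual on-disk format):
using System;

class BitPackDemo
{
    static void Main()
    {
        // Pack the four values into one integer using the stated widths
        // (15 + 10 + 5 + 5 = 35 bits), most significant field first,
        // mirroring the PowerShell string concatenation above.
        long packed = ((long)192 << 20) | ((long)20 << 10) | ((long)19 << 5) | 2;

        Console.WriteLine(Convert.ToString(packed, 2).PadLeft(35, '0'));
        // 00000001100000000000101001001100010

        Console.WriteLine(packed);                        // 201347682
        Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(packed)));
        // 62-52-00-0C-00-00-00-00 on a little-endian machine
    }
}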

How to determine UUDecoding method needed?

I'm communicating to a device that returns uuencoded data:
ASCII: EZQAEgETAhMQIBwIAUkAAABj
HEX: 45-5A-51-41-45-67-45-54-41-68-4D-51-49-42-77-49-41-55-6B-41-41-41-42-6A
The documentation for this device states the above is uuencoded, but I can't figure out how to decode it. The final result won't be a human-readable string, but the first byte reveals the number of bytes of product data that follow. (Which would be 23 or 24?)
I've tried using Crypt2 to decode it; it doesn't seem to match 644, 666, 744 modes.
I've tried to hand write it out following the Wiki: https://en.wikipedia.org/wiki/Uuencoding#Formatting_mechanism
Doesn't make sense! How do I decode this uuencoded data?
I agree with @canton7 that it looks like it's base64-encoded. You can decode it like this:
byte[] decoded = Convert.FromBase64String("EZQAEgETAhMQIBwIAUkAAABj");
and if you want, you can print the hex values like this
Console.WriteLine(BitConverter.ToString(decoded));
which prints
11-94-00-12-01-13-02-13-10-20-1C-08-01-49-00-00-00-63
As @HansKilian says in the comments, this is not uuencoded.
If you base64-decode it you get (in hex):
11 94 00 12 01 13 02 13 10 20 1c 08 01 49 00 00 00 63
The first number, 17 in decimal, is the same as the number of bytes following it, which matches:
The final result won't be a human readable string but the first byte reveals the number of bytes for the following product data.
(@HansKilian made the original call that it was base64-encoded. This answer just confirms it by looking at the first decoded byte, so please accept his answer.)
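For what it's worth, that length check is easy to reproduce in C#, reusing the decode from the other answer:
byte[] decoded = Convert.FromBase64String("EZQAEgETAhMQIBwIAUkAAABj");
Console.WriteLine(decoded[0]);             // 17 (0x11)
Console.WriteLine(decoded.Length - 1);     // 17 bytes of product data follow the length byte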

How to create a byte array that will have a BCC (Block Check Character) of a given value (0)?

I am trying to create a byte array that has a certain BCC (Block Check Character) result (=0).
The array will have a preamble of:
32 02 31 1F 31 1E 32 1F 54 45 53 54 3A 20   (control bytes followed by "TEST: ")
With a variable text message (msg) in the middle:
54 65 73 74 32   ("Test2")
Followed by a postamble of:
1E 37 1F 33 03
The BCC I get for this string is: 0x11
Here's the algorithm that returns this value (C++):
unsigned char bcc = 0;
int index = block.Find(0x03);   // ETX
for (int i = 0; i <= index; i++)
    bcc ^= block[i];
return bcc;
I'm trying to come up with a method of finding a middle message section that will result in a BCC of 0.
I am currently using trial and error, but I'm pretty sure there's a better way to do this; I just haven't come up with one that works. I've taken a swing at a tool that replicates the BCC method above (in C#), and it disagrees with the results I get (sigh).
You can set the checksum to zero by replacing any single character with that character XORed with the current checksum.
For example, change Test2 to Test#:
0x23 ('#') = 0x32 ('2') ^ 0x11
You may need to take care to avoid certain special characters (0x03 is clearly significant as the ETX terminator, and 0x00 is often best avoided in case strings are null-terminated). For example, if the replacement character would come out as 0x03, you might instead append two characters whose XOR is 0x03, such as 'a' and 'b'.
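A small C# sketch of that trick (ComputeBcc and the choice to adjust the last message byte are just for illustration; the loop mirrors the posted C++, including the ETX byte in the XOR):
using System;
using System.Linq;
using System.Text;

class BccDemo
{
    // XOR every byte up to and including the ETX (0x03), as in the C++ snippet above.
    static byte ComputeBcc(byte[] block)
    {
        int index = Array.IndexOf(block, (byte)0x03);
        byte bcc = 0;
        for (int i = 0; i <= index; i++)
            bcc ^= block[i];
        return bcc;
    }

    static void Main()
    {
        byte[] preamble  = { 0x32, 0x02, 0x31, 0x1F, 0x31, 0x1E, 0x32, 0x1F,
                             0x54, 0x45, 0x53, 0x54, 0x3A, 0x20 };          // ends with "TEST: "
        byte[] postamble = { 0x1E, 0x37, 0x1F, 0x33, 0x03 };
        byte[] msg = Encoding.ASCII.GetBytes("Test2");

        byte[] block = preamble.Concat(msg).Concat(postamble).ToArray();
        byte bcc = ComputeBcc(block);
        Console.WriteLine($"BCC before adjustment: 0x{bcc:X2}");

        // Replace the last message byte with itself XORed with the current BCC;
        // the whole block then XORs to zero.
        msg[msg.Length - 1] ^= bcc;
        block = preamble.Concat(msg).Concat(postamble).ToArray();
        Console.WriteLine($"Adjusted msg: {Encoding.ASCII.GetString(msg)}, BCC: 0x{ComputeBcc(block):X2}");
    }
}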

Decode a string 'BHQsZMaQQok='

I'm trying to decode a string like 'BHQsZMaQQok='.
All I know about the string is that it must be a number. I can find more encrypted strings if necessary.
It's a base64 string, 8 bytes: 04 74 2C 64 C6 90 42 89
Interpreted as an IEEE754 double, it is 3.3121005957308838680392659232E-287
As a big-endian long: 320930284789842569
As a little-endian long: -8556117160291961852
One can only guess how to interpret it...
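For reference, those interpretations can be reproduced in C# along these lines (BitConverter reads in the machine's byte order, little-endian on most PCs; which reading is "right" depends on whoever produced the string):
using System;
using System.Linq;

class DecodeDemo
{
    static void Main()
    {
        byte[] raw = Convert.FromBase64String("BHQsZMaQQok=");
        Console.WriteLine(BitConverter.ToString(raw));       // 04-74-2C-64-C6-90-42-89

        // Interpret the bytes in the machine's (little-endian) order...
        Console.WriteLine(BitConverter.ToInt64(raw, 0));
        Console.WriteLine(BitConverter.ToDouble(raw, 0));

        // ...and in big-endian order by reversing them first.
        byte[] reversed = raw.Reverse().ToArray();
        Console.WriteLine(BitConverter.ToInt64(reversed, 0));
        Console.WriteLine(BitConverter.ToDouble(reversed, 0));
    }
}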

Which encoding to use for reading a string from a file?

I'm parsing a file (which I don't generate) that contains a string. The string is always preceded by 2 bytes which tell me the length of the string that follows.
For example:
05 00 53 70 6F 72 74
would be:
Sport
Using a C# BinaryReader, I read the string using:
string s = new string(binaryReader.ReadChars(size));
Sometimes there's the odd funky character which seems to push the position of the stream on further than it should. For example:
0D 00 63 6F 6F 6B 20 E2 80 94 20 62 6F 6F 6B
Should be:
cook — book
and although it reads fine the stream ends up two bytes further along than it should?! (Which then messes up the rest of the parsing.)
I'm guessing it has something to do with the 0xE2 in the middle, but I'm not really sure why or how to deal with it.
Any suggestions greatly appreciated!
My guess is that the string is encoded in UTF-8. The 3-byte sequence E2 80 94 corresponds to the single Unicode character U+2014 (EM DASH).
In your first example
05 00 53 70 6F 72 74
none of the bytes are over 0x7F, which happens to be the limit of 7-bit ASCII. UTF-8 retains compatibility with ASCII by using the 8th bit to indicate that more bytes follow.
0D 00 63 6F 6F 6B 20 E2 80 94 20 62 6F 6F 6B
Just as Ted noticed, your "problems" start with 0xE2, because that is not a 7-bit ASCII character.
The length prefix 0x0D tells us there are 13 bytes of string data, but those 13 bytes decode to only 11 characters.
0xE2 tells us that we've found the start of a multi-byte UTF-8 sequence, since its most significant bit is set (it's over 127). In this case it begins the sequence that represents — (EM DASH).
As you correctly state, the 0xE2 byte is the problem. BinaryReader.ReadChars(n) does not read n bytes but n characters, decoded with the reader's encoding (UTF-8 by default). See Wikipedia for the Unicode encodings: in UTF-8, characters in the range U+0080 – U+07FF are represented by two bytes, and U+0800 – U+FFFF (which includes U+2014, the em dash) by three. Since the length prefix counts bytes, this is the reason for your offset mismatch.
You need to use BinaryReader.ReadBytes to fix the offset issue and then pass the bytes to an Encoding instance.
To make it work you need to read the bytes with BinaryReader and then decode them with the correct encoding. Assuming you are dealing with UTF-8, you pass the byte array to
Encoding.UTF8.GetString(byte[] rawData)
to get your correctly decoded string back.
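Something along these lines should keep the stream position in sync (a sketch, assuming the two-byte prefix is a little-endian UInt16 byte count):
using System;
using System.IO;
using System.Text;

class ReadDemo
{
    // Read one length-prefixed string: a little-endian UInt16 byte count
    // followed by that many bytes of UTF-8 data.
    static string ReadPrefixedString(BinaryReader reader)
    {
        ushort byteCount = reader.ReadUInt16();
        byte[] raw = reader.ReadBytes(byteCount);    // advances exactly byteCount bytes
        return Encoding.UTF8.GetString(raw);
    }

    static void Main()
    {
        byte[] data = { 0x0D, 0x00, 0x63, 0x6F, 0x6F, 0x6B, 0x20,
                        0xE2, 0x80, 0x94, 0x20, 0x62, 0x6F, 0x6F, 0x6B };
        using var reader = new BinaryReader(new MemoryStream(data));
        Console.WriteLine(ReadPrefixedString(reader));     // cook — book
        Console.WriteLine(reader.BaseStream.Position);     // 15: no overshoot
    }
}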
Yours,
Alois Kraus
