I'm working on serial port communication and I have some info in a .bat file which is encoded. I need to extract the file size, which I translated to hex, but the bytes are flipped (something to do with how it is stored in memory) and I need to get the correct size.
Here is the hex I have in my .bat file: 46 3F 01 00 (converted to decimal as-is, it's 1178534144).
So I'm having a lot of problems converting it...
and here is the hex number I need to get: 00 01 3F 46 (in decimal it's 81734).
EDIT:
Here are 64 bytes out of the .bat file which I converted to hex, because in ASCII it's unreadable. Focus on the part marked in red (the whole hex dump) and the part in blue (it's the hex number I need to convert, from 46 3f 01 00 to 0x00013F46).
Use the Convert.ToInt32(String, Int32) method, with the base as a parameter:
The base of the number in value, which must be 2, 8, 10, or 16.
So the code would be (16 for base 16, i.e. hex):
int result = Convert.ToInt32("463F0100", 16); // 1178534144
The decimal number 1178534144 is 0x463F0100. To get decimal 81734 you need to reverse the order of the 4 bytes to get 0x00013F46.
Under Windows you can include winsock.h and use the function ntohl.
https://learn.microsoft.com/en-us/windows/desktop/api/winsock/nf-winsock-ntohl
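In C#, if the four raw bytes are already in a byte array, a minimal sketch (assuming they are stored least-significant-byte first, as they appear in the file: 46 3F 01 00) could look like this:
byte[] raw = { 0x46, 0x3F, 0x01, 0x00 };
// assemble the value explicitly as little-endian, independent of the machine's byte order
int size = raw[0] | (raw[1] << 8) | (raw[2] << 16) | (raw[3] << 24);
Console.WriteLine(size); // 81734 (0x00013F46)
// on a little-endian PC, BitConverter.ToInt32(raw, 0) gives the same result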
So the issue is that when using C# the char is 4 bytes, so "ABC" is (65 0 66 0 67 0).
When feeding that into a wstring in C++ by sending it through a socket, I get the following output: A.
How am I able to convert such a string to a C++ string?
Sounds like you need ASCII or UTF-8 encoding instead of Unicode (UTF-16).
65 0 66 0 67 0 is only going to get you the A, since the next zero is interpreted as a null termination character in C++.
Strategies for converting Unicode to ASCII can be found here.
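For example, on the C# sending side (a minimal sketch; the socket-writing code is assumed) you can pick the encoding explicitly:
string text = "ABC";
byte[] utf16Bytes = Encoding.Unicode.GetBytes(text); // 65 0 66 0 67 0 - what you are sending now
byte[] asciiBytes = Encoding.ASCII.GetBytes(text);   // 65 66 67 - one byte per character
byte[] utf8Bytes  = Encoding.UTF8.GetBytes(text);    // 65 66 67 for plain ASCII text
// write asciiBytes or utf8Bytes to the socket instead of utf16Bytes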
using C# the char is 4 bytes
No, in C# strings are encoded in UTF-16. A code unit in UTF-16 is two bytes, and for simple characters a single code unit represents a code point (e.g. 65 0).
On Windows, wstring is usually UTF-16 encoded too (2 or 4 bytes per code point), but on Unix/Linux wstring usually uses UTF-32 encoding (always 4 bytes).
A Unicode code point in the ASCII range has the same numerical value as in ASCII; that is why UTF-16-encoded ASCII text often looks like this: {num} 0 {num} 0 {num} 0...
See the details here: https://en.wikipedia.org/wiki/UTF-16
Could you show us some code demonstrating how you constructed your wstring object?
The null byte is critical here, because it is the end marker for ASCII/ANSI strings.
I have been able to solve the issue by using a std::u16string.
Here is some example code
std::vector<char> data = { 65, 0, 66, 0, 67, 0 };
// reinterpret the raw bytes as UTF-16 code units (two bytes per unit)
std::u16string string(reinterpret_cast<const char16_t*>(data.data()), data.size() / 2);
// now string should be encoded right (it holds u"ABC")
I am trying to convert an AS3 (ActionScript 3) function to C#.
This ActionScript function uses a class called ByteArray, which, as far as I am aware, is basically what it sounds like; I think it's similar to how byte[] works in C#. Anyway, I have tried my best to convert the code to C# using a MemoryStream, writing the bytes to it, and then returning a UTF-8 string, as you can see in my code below. However, I feel as if my version doesn't do what the ActionScript code does, and that is where my question above comes in.
With those negative numbers being written into _loc1_ (the ByteArray) and the _loc1_.uncompress() call, that's where I feel like I am failing, and I was wondering if someone could help me out in converting this function so it's fully accurate.
On top of that, I would also like to ask whether what I am doing with the negative numbers in my C# code matches what the ActionScript code does. It would mean a lot (:
(Sorry if this isn't fully clear or if what I say doesn't quite match up.)
ActionScript Code:
private function p() : String
{
var _loc1_:ByteArray = new ByteArray();
_loc1_.writeByte(120);
_loc1_.writeByte(-38);
_loc1_.writeByte(99);
_loc1_.writeByte(16);
_loc1_.writeByte(12);
_loc1_.writeByte(51);
_loc1_.writeByte(41);
_loc1_.writeByte(-118);
_loc1_.writeByte(12);
_loc1_.writeByte(50);
_loc1_.writeByte(81);
_loc1_.writeByte(73);
_loc1_.writeByte(49);
_loc1_.writeByte(-56);
_loc1_.writeByte(13);
_loc1_.writeByte(48);
_loc1_.writeByte(54);
_loc1_.writeByte(54);
_loc1_.writeByte(14);
_loc1_.writeByte(48);
_loc1_.writeByte(46);
_loc1_.writeByte(2);
_loc1_.writeByte(0);
_loc1_.writeByte(45);
_loc1_.writeByte(-30);
_loc1_.writeByte(4);
_loc1_.writeByte(-16);
_loc1_.uncompress();
_loc1_.position = 0;
return _loc1_.readUTF();
}
My C# Code:
public string p()
{
MemoryStream loc1 = new MemoryStream();
loc1.WriteByte((byte)120);
loc1.WriteByte((byte)~-38);
loc1.WriteByte((byte)99);
loc1.WriteByte((byte)16);
loc1.WriteByte((byte)12);
loc1.WriteByte((byte)51);
loc1.WriteByte((byte)41);
loc1.WriteByte((byte)~-118);
loc1.WriteByte((byte)12);
loc1.WriteByte((byte)50);
loc1.WriteByte((byte)81);
loc1.WriteByte((byte)73);
loc1.WriteByte((byte)49);
loc1.WriteByte((byte)~-56);
loc1.WriteByte((byte)13);
loc1.WriteByte((byte)48);
loc1.WriteByte((byte)54);
loc1.WriteByte((byte)54);
loc1.WriteByte((byte)14);
loc1.WriteByte((byte)48);
loc1.WriteByte((byte)46);
loc1.WriteByte((byte)2);
loc1.WriteByte((byte)0);
loc1.WriteByte((byte)45);
loc1.WriteByte((byte)~-30);
loc1.WriteByte((byte)4);
loc1.WriteByte((byte)~-16);
loc1.Position = 0;
return Encoding.UTF8.GetString(loc1.ToArray());
}
1) In C#, bytes are unsigned. You cannot convert a signed byte to an unsigned byte with the complement operator (~), because your intention is for the bitwise representation to stay identical, whereas the complement flips every bit.
One simple way to convert is to mask with 0xFF: -37 & 0xFF = 219. There are other, mathematically equivalent ways, such as checking for negatives: sbyte sb = -37; byte b = (byte)(sb < 0 ? sb + 256 : sb);
2) The built-in System.IO.Compression namespace is lacking in a number of ways. For one, it doesn't support decompressing zlib data, which is what your byte array holds. The best way is to use a third-party package from NuGet instead. The DotNetZip library does what you need, specifically the Ionic.Zlib.ZlibStream.UncompressBuffer function.
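Putting both points together, a rough sketch (assuming the DotNetZip package is referenced; the signed values are the ones from your ActionScript listing) might look like this:
sbyte[] signedValues = { 120, -38, 99, 16, 12, 51, 41, -118, 12, 50, 81, 73, 49, -56,
                         13, 48, 54, 54, 14, 48, 46, 2, 0, 45, -30, 4, -16 };
byte[] data = new byte[signedValues.Length];
for (int i = 0; i < signedValues.Length; i++)
    data[i] = (byte)(signedValues[i] & 0xFF); // same bit pattern: -38 -> 0xDA, -118 -> 0x8A
byte[] uncompressed = Ionic.Zlib.ZlibStream.UncompressBuffer(data);
string result = Encoding.UTF8.GetString(uncompressed);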
(1)
@Jimmy has given you a good answer.
This is what he meant when he told you "to mask with 0xFF" so that your -38 becomes masked as:
loc1.WriteByte( (byte)(-38 & 0xFF) );
Apply the same logic to any other values that have a minus sign.
(2)
It might be easier if you just use values written in hex instead of decimal. This means instead of decimal 255 you write the equivalent hex 0xFF, since byte values are conventionally written in hex. WriteByte is auto-converting your decimals, but that isn't helping you to see what is going on...
For example, your first two byte values are 120 -38, but in hex that is 0x78 0xDA.
Now if you search for the bytes 0x78 0xDA you will find that those two bytes are the zlib header for DEFLATE-compressed data.
This ZLIB detail is important to know for the next step...
(3)
Variable names are not always recovered during decompiling. This is why your code is full of generic names like _loc1_ (the real variable names are unknown, only their data types).
Your _loc1_.uncompress(); call can take a String argument specifying the algorithm:
public function uncompress(algorithm:String) :void //from AS3 documentation
During decompilation that important info may have been lost. Luckily there are only 3 options: "zlib", "deflate" or "lzma". From point (2) above we can see the data carries a zlib header, so it should be _loc1_.uncompress("zlib"); (which also happens to be the default when no algorithm is specified).
Solution:
Create a byte array (not a MemoryStream) and manually fill it with hex values (e.g. -38 is written 0xDA).
First convert each of your numbers to hex. You can use Windows Calculator in Programmer mode (under the View menu): type a decimal value in DEC mode, then press HEX to see the same value in hex format. An online converter can do it too.
The final hex values should look like 78 DA 63 10 0C 33 29 8A 0C 32 51 49 31 C8 ... and so on, up to the last hex value F0, which equals your final decimal value -16.
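If you would rather not click through the calculator for every value, a short C# snippet does the same conversion (the sample values are taken from the AS3 listing above):
foreach (int v in new[] { 120, -38, 99, 16, -118, -56, -30, -16 })
    Console.Write((v & 0xFF).ToString("X2") + " ");
// prints: 78 DA 63 10 8A C8 E2 F0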
Then you can easily do...
public string p()
{
    byte[] loc_Data = new byte[] {
        0x78, 0xDA, 0x63, 0x10, 0x0C, 0x33, 0x29, 0x8A,
        0x0C, 0x32, 0x51, 0x49, 0x31, 0xC8, 0x0D, 0x30,
        0x36, 0x36, 0x0E, 0x30, 0x2E, 0x02, 0x00, 0x2D,
        0xE2, 0x04, 0xF0
    };
    var loc_Uncompressed = Ionic.Zlib.ZlibStream.UncompressBuffer( loc_Data );
    return Encoding.UTF8.GetString( loc_Uncompressed );
}
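One more detail worth checking (an assumption on my part, based on the documented behaviour of AS3's ByteArray.readUTF(), which reads a string prefixed with an unsigned 16-bit length): the decompressed buffer probably starts with two length bytes that readUTF() would consume, so you may need something like:
// hypothetical adjustment: read the 2-byte big-endian length prefix, then decode only that many bytes
int utfLength = (loc_Uncompressed[0] << 8) | loc_Uncompressed[1];
return Encoding.UTF8.GetString(loc_Uncompressed, 2, utfLength);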
Is it possible to encode a string in a certain way to minimize the number of bytes? Basically, I need to get 29 characters down to 11 bytes of data.
var myString = "usmiaanzaklaacn40879005900133";
byte[] bytes = Encoding.UTF8.GetBytes(myString);
Console.WriteLine(bytes.Length); //Output = 29, 1 byte per character
Console.ReadKey();
This shows that when encoding with UTF-8, a 29-character string results in 29 bytes... I need the 29-character string to result in 11 bytes or less. Is this possible? I was thinking I could possibly use some sort of lookup or binary mapping algorithm, but I am a little unsure how to go about this in C#.
EDIT:
So I have a chip that has a custom data payload of 11 bytes. I want to be able to compress a 29-character string (that is unique) into bytes, assign it to the "custom data", and then receive the custom data bytes and decompress them back into the 29-character string... Now I don't know if this is possible, but any help would be greatly appreciated. Thanks :)
The string itself breaks down as [usmia]-[anzakl]-[aacn40879005900]-[133] = [origin]-[dest]-[random/unique]-[weight].
OK, the last 14 characters are digits.
I have access to all the origins and destinations... Would it be feasible to create a key-value store where the key is the origin (e.g. usmia) and the value is a particular byte? I guess that would mean I could only have about 256 different origins and destinations, and then just encode the last 14 characters as an integer??
15 lg(26) + 14 lg(10) ~= 117 bits ~= 14.6 bytes. (lg = log base 2)
So even if I am optimistic and assume that your strings are always 15 lower-case letters followed by 14 digits, it would still take a minimum of 15 bytes to represent one.
Unless there are more restrictions, like only the lower-case letters a, c, i, k, l, m, n, s, u, and z being allowed, then no, you can't code that into 11 bytes. Actually, not even then: even that would take a little over 12 bytes.
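For illustration, here is the same arithmetic as a small C# snippet (the alphabet sizes are taken from the assumptions above; adjust them if your real alphabet differs):
// minimum size for 15 letters from a 26-symbol alphabet plus 14 decimal digits
double letterBits = 15 * Math.Log(26, 2);         // ~70.5 bits
double digitBits  = 14 * Math.Log(10, 2);         // ~46.5 bits
Console.WriteLine((letterBits + digitBits) / 8);  // ~14.6 bytes
// with only 10 allowed letters: (15 * Math.Log(10, 2) + digitBits) / 8 is still ~12.05 bytes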
I've got a method that's supposed to generate a 64-byte (512-bit) salt for me:
public static string GenerateSalt()
{
var rngCrypto = new RNGCryptoServiceProvider();
byte[] saltBytes = new byte[64];
rngCrypto.GetBytes(saltBytes);
string result = Convert.ToBase64String(saltBytes);
return result;
}
This seems to run fine; the saltBytes byte array has a size of 64 bytes. However, I can't insert the result into my MS SQL database table, which has a char(64) column.
My assumption is that my use of Convert.ToBase64String(saltBytes) is at fault, but I'd like to know how I can improve this. A quick run through System.Text.ASCIIEncoding.Unicode.GetByteCount(secondSalt) reveals a string size of 176 bytes instead of 64 bytes.
A byte array is logically a number in base 256. Converting that to a number in base 64 is going to make it longer, just like converting hex F0 to binary 11110000 makes it longer: the same value needs more digits in a smaller base.
If you want to store the salt in the database in a human-readable base-64-encoded string then it is going to have to be much longer than 64 single-byte characters.
As for running it through the ASCII encoder -- I have no idea what you're trying to do here. That sounds like an odd thing to do to non-textual data. Can you explain?
When you start with 64 bytes (512 bits) and convert to base 64, you're storing only 6 bits per output character, so you need ceiling(512/6) = 86 characters, which base64 then pads up to 88. (The 176 bytes you measured is simply 88 characters × 2 bytes each, because Encoding.Unicode counts UTF-16 bytes.)
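A quick way to see both numbers (a sketch using the same 64-byte salt as above):
byte[] salt = new byte[64];
new RNGCryptoServiceProvider().GetBytes(salt);
string base64 = Convert.ToBase64String(salt);
Console.WriteLine(base64.Length);                         // 88 characters
Console.WriteLine(Encoding.Unicode.GetByteCount(base64)); // 176 bytes, 2 per char in UTF-16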
I'm trying to understand the following:
I am declaring 64 bytes as the array length (the buffer). When I convert it to a base-64 string, it says the length is 88. Shouldn't the length only be 64, since I am passing in 64 bytes? I could be totally misunderstanding how this actually works. If so, could you please explain?
//Generate a cryptographic random number
RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
// Create byte array
byte[] buffer = new byte[64];
// Get random bytes
rng.GetBytes(buffer);
// This line gives me 88 as a result.
// Shouldn't it give me 64 as declared above?
throw new Exception(Convert.ToBase64String(buffer).Length.ToString());
// Return a Base64 string representation of the random number
return Convert.ToBase64String(buffer);
No, base-64 encoding uses a whole byte (one character) to represent six bits of the data being encoded. The lost two bits are the price of using only alphanumeric characters, plus and slash as your symbols (basically, excluding the byte values that map to non-printable or special characters in plain ASCII/UTF-8). The result you are getting is (64 × 4/3) rounded up to the nearest multiple of 4, i.e. 88.
Base64 encoding converts 3 octets into 4 encoded characters; therefore
ceil(64/3) * 4 = 22 * 4 = 88 characters.
Read here.
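As a quick check of that formula (a small C# sketch):
int n = 64;
int expected = 4 * ((n + 2) / 3);                         // 4 * ceil(n / 3) = 88, including '=' padding
int actual = Convert.ToBase64String(new byte[n]).Length;  // 88
Console.WriteLine(expected == actual);                    // True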
Shouldn't the length only be 64, since I am passing in 64 bytes?
No. You are passing 64 tokens in Base256 notation. Base64 has less information per token, so it needs more tokens. 88 sounds about right.