I have a hex value of 0x1047F71 and I want to put it into a byte array of 4 bytes. Is this the right way to do it:
byte[] sync_welcome_sent = new byte[4] { 0x10, 0x47, 0xF7, 0x01 };
or
byte[] sync_welcome_sent = new byte[4] { 0x01, 0x04, 0x7F, 0x71 };
I would appreciate any help.
If you want to be compatible with Intel little-endian, the answer is "none of the above", because the byte order would be 71h, 7Fh, 04h, 01h.
For big-endian, the second option above is correct: 01h, 04h, 7Fh, 71h.
You can get the bytes with the following code:
uint test = 0x1047F71;
var bytes = BitConverter.GetBytes(test);
If you want big-endian, you can just reverse the bytes using LINQ like so:
var bytes = BitConverter.GetBytes(test).Reverse().ToArray();
However, if you are running the code on a big-endian system, reversing the bytes will not be necessary, since BitConverter.GetBytes() will already return them in big-endian order there.
Therefore you should write the code as follows:
uint test = 0x1047F71;
var bytes = BitConverter.GetBytes(test);
if (BitConverter.IsLittleEndian)
bytes = bytes.Reverse().ToArray();
// now bytes[] are big-endian no matter what system the code is running on.
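As a quick check, the snippet above can be exercised end to end in a small self-contained program:

```csharp
using System;
using System.Linq;

class EndianCheck
{
    static void Main()
    {
        uint test = 0x1047F71;
        var bytes = BitConverter.GetBytes(test);
        if (BitConverter.IsLittleEndian)
            bytes = bytes.Reverse().ToArray();

        // Prints "01-04-7F-71" regardless of the host's byte order.
        Console.WriteLine(BitConverter.ToString(bytes));
    }
}
```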
My goal is to get a 64-bit value, hence a byte array of size 8. However, my problem is that I want to set the first 20 bits myself and have the rest be 0s. Can this be done with the shorthand byte array initialisation?
E.g. if I wanted all 0s I would say:
byte[] test = new byte[] {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
What I've thought about/tried:
Each hexadecimal digit corresponds to 4 binary digits. Hence, if I want to specify the first 20 bits, I specify the first 5 hexadecimal digits? But I'm not sure how to do this:
byte[] test = new byte[] {0xAF, 0x17, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00};
That would mean I've specified the first 24 bits, right? Not 20.
I could use a BitArray and do it that way but I'm just wondering whether it can be done in the above way.
How about:
byte byte1 = 0xFF;
byte byte2 = 0xFF;
byte byte3 = 0xFF;
// 8bits 8bits 4bits : total = 20 bits
// 11111111 11111111 11110000
byte[] test = new byte[] { byte1, byte2, (byte)(byte3 & 0xF0), 0x00, 0x00, 0x00, 0x00, 0x00 };
You can write your bytes backward, and use BitConverter.GetBytes(long):
var bytes = BitConverter.GetBytes(0x1017AFL); // the 'L' suffix selects the long overload: 8 bytes, { 0xAF, 0x17, 0x10, 0x00, ... } on little-endian
Since each hex digit corresponds to a single four-bit nibble, you can initialize data in four-bit "increments". However, data written in reverse will almost certainly be less clear to human readers of your code.
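An alternative sketch, assuming the 20 bits (using the question's example digits 0xAF171) are meant to occupy the top of the 64-bit value: a shift makes the intent explicit without writing the bytes backward.

```csharp
using System;
using System.Linq;

class TopBits
{
    static void Main()
    {
        // Place the question's 5 hex digits (20 bits) at the top of a
        // 64-bit value: 64 - 20 = 44, so shift left by 44.
        ulong value = (ulong)0xAF171 << 44;

        var bytes = BitConverter.GetBytes(value);
        if (BitConverter.IsLittleEndian)
            bytes = bytes.Reverse().ToArray();

        // Prints "AF-17-10-00-00-00-00-00"
        Console.WriteLine(BitConverter.ToString(bytes));
    }
}
```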
I want to define a byte array like "\x90<>", but I can only define it using numbers:
byte[] a = new byte[] { 0x90, 0x3c, 0x3e };
which is less readable and takes more time to write. Can I do something like this in .NET?
byte[] a = '\x90<>';
Edit:
I'm asking this because in C and C++ I can actually define a byte array with
char myvar[] = "\x90<>"; or char *myvar = "\x90<>";
which are equal to new byte[] { 0x90, 0x3c, 0x3e } in C#.
A string is an array of chars, and a char is not a byte in .NET.
You can't define bytes with a string literal, and there is no easy way to do this; you must implement your own converter methods and use those.
Check this out:
static byte[] GetBytes(string str)
{
byte[] bytes = new byte[str.Length * sizeof(char)];
System.Buffer.BlockCopy(str.ToCharArray(), 0, bytes, 0, bytes.Length);
return bytes;
}
static string GetString(byte[] bytes)
{
char[] chars = new char[bytes.Length / sizeof(char)];
System.Buffer.BlockCopy(bytes, 0, chars, 0, bytes.Length);
return new string(chars);
}
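One caveat worth checking before using this for the question's goal (a quick demonstration, assuming a little-endian machine): a .NET char is two UTF-16 bytes, so this converter returns two bytes per character rather than the 3-byte array the question asks for.

```csharp
using System;

class CharBytes
{
    static byte[] GetBytes(string str)
    {
        byte[] bytes = new byte[str.Length * sizeof(char)];
        System.Buffer.BlockCopy(str.ToCharArray(), 0, bytes, 0, bytes.Length);
        return bytes;
    }

    static void Main()
    {
        byte[] bytes = GetBytes("\x90<>");
        Console.WriteLine(bytes.Length);                 // 6, not 3
        Console.WriteLine(BitConverter.ToString(bytes)); // "90-00-3C-00-3E-00" on little-endian
    }
}
```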
Try converting it to a char array in the declaration. The escaped characters will be correctly processed into a single char and the unescaped characters will be split into individual characters.
Then you can cast each element in the array as a byte. You could use LINQ to make it easy:
var bytes = "\x90<>".ToCharArray().Select(b => (byte)b);
foreach(var myByte in bytes){
Console.WriteLine(string.Format("0x{0:x2}", myByte));
}
I believe this gives you the correct output of 0x90, 0x3c, 0x3e
Edit
I'm sure there will be some issue with encoding here, but none was specified in the question.
Extra Edit
The code above will give you an IEnumerable<byte>. To get the actual byte[] you want, just call
bytes.ToArray();
Convert it to a byte array like this:
string source = "\x90<>";
byte[] bytes = Encoding.Default.GetBytes(source);
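Note that Encoding.Default is platform-dependent (on .NET Core/.NET 5+ it is UTF-8, where \x90 encodes as two bytes, 0xC2 0x90). If the goal is exactly one byte per character, a fixed single-byte encoding such as Latin-1 is a safer sketch:

```csharp
using System;
using System.Text;

class Latin1Bytes
{
    static void Main()
    {
        string source = "\x90<>";
        // ISO-8859-1 (Latin-1) maps each char in 0x00-0xFF to that same single byte.
        byte[] bytes = Encoding.GetEncoding("ISO-8859-1").GetBytes(source);
        Console.WriteLine(BitConverter.ToString(bytes)); // "90-3C-3E"
    }
}
```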
How can I write a byte array to a file but then break when the byte that is about to be written is equal to 0x00 or null?
I've tried turning the bytes into a string and trimming it, but that does not work.
message = new byte[4096];
clientStream.Read(message, 0, 4096);
clientStream.Flush();
File.WriteAllText(AUTH_KEY + "/email.txt", encoder.GetString(message).Trim());
Find the first index of 0x00 and then block write everything before it.
You can copy everything before the first 0x00 into a new array and then call
File.WriteAllBytes( string path, byte[] bytes)
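A minimal sketch of that approach (the file name and sample buffer below are placeholders):

```csharp
using System;
using System.IO;

class WriteUntilNull
{
    static void WriteUpToFirstNull(string path, byte[] message)
    {
        // Find the first 0x00; if there is none, write the whole buffer.
        int length = Array.IndexOf(message, (byte)0x00);
        if (length < 0)
            length = message.Length;

        using (var fs = new FileStream(path, FileMode.Create))
            fs.Write(message, 0, length);
    }

    static void Main()
    {
        byte[] message = { 0x68, 0x69, 0x00, 0x7A }; // "hi", a null, then leftover data
        WriteUpToFirstNull("email.txt", message);
        Console.WriteLine(File.ReadAllBytes("email.txt").Length); // 2
    }
}
```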
I use the CCCrypt() function to encrypt, and then decrypt the result in C#, but the output is not the same as the original plain text.
The key is 256 bits long, and the IV is the default value.
The main code is shown below:
// Encrypt
{
// the key is 32 bytes (256 bits).
Byte iv[] = { 0x12, 0x34, 0x56, 0x78, 0x90, 0xAB, 0xCD, 0xEF, 0x12, 0x34, 0x56, 0x78, 0x90, 0xAB, 0xCD, 0xEF };
size_t numBytesEncrypted = 0;
CCCryptorStatus cryptStatus = CCCrypt(kCCEncrypt, // Operation
kCCAlgorithmAES128, // Algorithm
kCCOptionPKCS7Padding, // Option
keyPtr, // key
kCCKeySizeAES256, // key length
iv, /* initialization vector (optional) */
[self bytes], // plain text
dataLength, /* input */
buffer,
bufferSize, /* output */
&numBytesEncrypted); //dataOutMove
if (cryptStatus == kCCSuccess) {
NSData *encryptedData = [NSData dataWithBytesNoCopy:buffer length:numBytesEncrypted];
NSString *encryptedString = [encryptedData base64Encoding];
}
// Decrypt
{
byte[] _key1 = { 0x12, 0x34, 0x56, 0x78, 0x90, 0xAB, 0xCD, 0xEF, 0x12, 0x34, 0x56, 0x78, 0x90, 0xAB, 0xCD, 0xEF };
public static string AESDecrypt(string encryptedString, string key)
{
AesCryptoServiceProvider aes = new AesCryptoServiceProvider();
aes.BlockSize = 128;
aes.KeySize = 256;
aes.IV = _key1;
aes.Key = Encoding.UTF8.GetBytes(key);
aes.Mode = CipherMode.CBC;
aes.Padding = PaddingMode.PKCS7;
// Convert Base64 strings to byte array
byte[] src = System.Convert.FromBase64String(encryptedString);
// decryption
using (ICryptoTransform decrypt = aes.CreateDecryptor())
{
byte[] dest = decrypt.TransformFinalBlock(src, 0, src.Length);
return Encoding.UTF8.GetString(dest);
}
}
}
EDIT:
I found the reason: it is the keyPtr. In the encryption step, I originally handled the key like this:
char keyPtr[kCCKeySizeAES256+1]; // room for terminator (unused)
bzero(keyPtr, sizeof(keyPtr));   // fill with zeroes (for padding)
[key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding];
Now I have modified that code like this:
NSData *keyData = [key dataUsingEncoding:NSUTF8StringEncoding];
NSUInteger keyLength = [keyData length];
Byte *keyPtr= (Byte *)malloc(keyLength);
memcpy(keyPtr, [keyData bytes], keyLength);
Then I got the correct output.
Although the problem is gone, I really do not know what was wrong with the previous version.
Crypto is designed to fail badly if there is even a small error. You need to check explicitly that the keys are byte-for-byte the same (check bytes, not characters), and the same for the IVs. You are decrypting in CBC mode; are you certain that the encryption is in CBC mode? It isn't set explicitly in your code. The same goes for padding: are you certain that the encryption method is using PKCS7?
In general don't rely on default settings, but set them explicitly in your code.
As a last point, are you using the same byte <-> character conversions on both sides. Again it is better to explicitly state what you are using. For example, UTF-8 text may come with an initial BOM that a UTF-8 conversion will ignore, but a different conversion will include in the bytes.
I had the same problem with some code I was reviewing, where data encrypted by CCCrypt and decrypted by another tool did not result in the same plaintext. I tracked it down to a failure to generate the key properly. It's this line:
[key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding];
If the C-string version of "key" is longer than kCCKeySizeAES256 bytes, then getCString:maxLength:encoding: will return NO (because the string won't fit in the buffer provided) and will not touch the buffer pointed to by keyPtr, leaving a key that is effectively all NUL bytes (from the bzero call).
Your replacement code, using [key dataUsingEncoding:], does not suffer this problem, as it creates an NSData object as big as needed to store the string. However, it WILL suffer the problem of a key that is not big enough, unless you can guarantee that the incoming key is long enough.
Using plain text keys is not recommended. If you have a plain text "passphrase", then use a Key derivation function to turn that into a binary sequence. The "gold standard" is PBKDF2, but even doing a SHA2-256 is better than feeding in plain text. SHA2-256 will give you 32 bytes. (SHA-1 is insufficient, as it's only 20 bytes).
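On the C# side, PBKDF2 is available as Rfc2898DeriveBytes. A minimal sketch (the passphrase, salt, and iteration count here are illustrative only; in real code the salt must be random and stored alongside the ciphertext, and the SHA-256 overload requires .NET Framework 4.7.2+ or .NET Core 2.0+):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class DeriveKey
{
    static void Main()
    {
        // Illustrative values only: use a random, stored salt in real code,
        // and an iteration count as high as your latency budget allows.
        string passphrase = "my secret passphrase";
        byte[] salt = Encoding.UTF8.GetBytes("example-salt");

        using (var kdf = new Rfc2898DeriveBytes(passphrase, salt, 100000, HashAlgorithmName.SHA256))
        {
            byte[] key = kdf.GetBytes(32); // 256-bit AES key
            Console.WriteLine(key.Length); // 32
        }
    }
}
```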
I am using BitConverter.ToInt32 to convert a Byte array into int.
I have only two bytes, [0][26], but the function needs 4 bytes, so I have to add two 0 bytes to the front of the existing bytes.
What is the quickest method?
Thank you.
You should probably do (int)BitConverter.ToInt16(..) instead. ToInt16 is made to read two bytes into a short. Then you simply convert that to an int with the cast.
You should call BitConverter.ToInt16, which only reads two bytes.
short is implicitly convertible to int.
Array.Copy. Here is some code:
byte[] arr = new byte[] { 0x12, 0x34 };
byte[] done = new byte[4];
Array.Copy(arr, 0, done, 2, 2); // http://msdn.microsoft.com/en-us/library/z50k9bft.aspx
int myInt = BitConverter.ToInt32(done, 0); // 0x34120000 on a little-endian machine
However, a call to BitConverter.ToInt16(byte[], int) seems like a better idea; then just save the result to an int:
int myInt = BitConverter.ToInt16(...);
Keep in mind endianness, however. On a little-endian machine, the bytes { 0x00, 0x02 } read with ToInt16 give 512, not 2 (the value 0x0002 is of course still 2 regardless of endianness; it's the byte order in memory that differs).
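If the two bytes come from a big-endian source (such as a network stream), a sketch using BinaryPrimitives (assuming .NET Core 2.1+/.NET 5+, or the System.Memory package) sidesteps the endianness pitfall entirely:

```csharp
using System;
using System.Buffers.Binary;

class ReadBigEndian
{
    static void Main()
    {
        // The question's two bytes, assumed to arrive in big-endian (network) order.
        byte[] data = { 0x00, 0x1A }; // [0][26]

        // BinaryPrimitives reads in the byte order you name, so the result
        // does not depend on the machine the code runs on.
        int value = BinaryPrimitives.ReadUInt16BigEndian(data);
        Console.WriteLine(value); // 26
    }
}
```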