I currently have a C# function which takes a byte[] and an alignment to pad it to, but during encryption an error is thrown every once in a while.
private byte[] AlignByteArray(byte[] content, int alignto)
{
    long thelength = content.Length - 1;
    long remainder = 1;
    while (remainder != 0)
    {
        thelength += 1;
        remainder = thelength % alignto;
    }
    Array.Resize(ref content, (int)thelength);
    return content;
}
Does anyone see any issues with the function? I'm getting errors during AES encryption that the content size is not valid, suggesting that it is not padding correctly.
Here's a simple solution:
private static void PadToMultipleOf(ref byte[] src, int pad)
{
    int len = (src.Length + pad - 1) / pad * pad;
    Array.Resize(ref src, len);
}
Are you sure it's 0x16 and not 16? (0x16 is 22 in decimal; I'll assume 16 was intended.)
Edit: Any decent compiler should turn (x / 16) into (x >> 4).
int length = 16 * ((content.Length + 15) / 16);
Array.Resize(ref content, length);
Edit 2: For the general case:
int length = alignment * ((content.Length + alignment - 1) / alignment);
Array.Resize(ref content, length);
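For example, padding a 13-byte buffer for a 16-byte AES block size (note that Array.Resize fills the new tail bytes with zeros, i.e. this is zero padding, not PKCS#7):
byte[] content = new byte[13];
int alignment = 16;
int length = alignment * ((content.Length + alignment - 1) / alignment);
Array.Resize(ref content, length);
Console.WriteLine(content.Length); // prints 16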
I am trying to convert this piece of C code into C#, but I've stumbled on some parts where I don't understand exactly what is happening, so I am unable to translate it.
void foo(uint64_t *output, uint64_t *input, uint32_t Length){
    uint64_t st[25];
    memcpy(st, input, Length);
    ((uint8_t *)st)[Length] = 0x01;
    memset(((uint8_t *)st) + Length + 1, 0x00, 128 - Length - 1);
    for(int i = 16; i < 25; ++i) st[i] = 0x00UL;
    // Last bit of padding
    st[16] = 0x8000000000000000UL;
    bar(st);
    memcpy(output, st, 200);
}
More specifically, the ((uint8_t *)st)[Length] = 0x01; part: I cannot understand the cast/pointer in this line. What is the * there for? If somebody could explain what is happening, I would be grateful.
What I've got so far in C#:
private void foo(ref ulong[] output, ref ulong[] input, uint Length)
{
    ulong[] st = new ulong[25];
    //memcpy(st, input, Length);
    Buffer.BlockCopy(input, 0, st, 0, (int)Length);
    // Help on this line please:
    //((uint8_t *)st)[Length] = 0x01;
    // Still don't know what to do here either:
    //memset(((uint8_t *)st) + Length + 1, 0x00, 128 - Length - 1);
    for (int i = 16; i < 25; ++i)
    {
        st[i] = 0x00U;
    }
    // Last bit of padding
    st[16] = 0x8000000000000000U;
    bar(st);
    //memcpy(output, st, 200);
    Buffer.BlockCopy(st, 0, output, 0, 200);
}
Thank you.
What your function does is rather primitive memory manipulation:
void foo(uint64_t *output, uint64_t *input, uint32_t Length)
{
    uint64_t st[25];
Your function takes a pointer to the input data (Length bytes, at most 16 uint64_t elements) and prepares a local 25-element buffer (st) to store a copy of it.
    memcpy(st, input, Length);
    ((uint8_t *)st)[Length] = 0x01;
    memset(((uint8_t *)st) + Length + 1, 0x00, 128 - Length - 1);
Copy Length bytes of the input into st, then write a single byte with value 1 at offset Length: the (uint8_t *) cast reinterprets the uint64_t buffer as raw bytes so it can be indexed byte by byte. The memset then zeroes the rest of the first 128 bytes.
    for(int i = 16; i < 25; ++i) st[i] = 0x00UL;
The last 9 elements of the buffer (st[16] through st[24]) are cleared to 0. Clearing st[16] is pointless, as it is overwritten by the next line...
    // Last bit of padding
    st[16] = 0x8000000000000000UL;
Now the first 128 bytes (16*8) hold the input data plus one extra byte with value 1 at offset Length, followed by zeros; of the remaining elements, st[16] holds 1<<63 and the rest are 0.
    bar(st);
    memcpy(output, st, 200);
}
Call bar, which does something we have no clue about, and copy the resulting 25 elements (200 bytes) into the output buffer.
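To answer the translation question directly: since the cast just reinterprets the uint64_t buffer as raw bytes, in C# you can get the same effect without unsafe code by doing the byte-level work in a 200-byte scratch array and copying it into the ulong[25] afterwards. A minimal sketch, assuming a little-endian platform (so that Buffer.BlockCopy reproduces the C byte layout) and Length < 128:
private static void foo(ulong[] output, ulong[] input, int Length)
{
    // uint64_t st[25] viewed as raw bytes: 25 * 8 = 200 bytes
    byte[] raw = new byte[200];
    // memcpy(st, input, Length)
    Buffer.BlockCopy(input, 0, raw, 0, Length);
    // ((uint8_t *)st)[Length] = 0x01 is just a byte write at offset Length
    raw[Length] = 0x01;
    // the memset and the clearing loop are unnecessary here,
    // because a freshly allocated C# array is already zero-filled
    ulong[] st = new ulong[25];
    Buffer.BlockCopy(raw, 0, st, 0, 200);
    // Last bit of padding
    st[16] = 0x8000000000000000UL;
    bar(st);
    // memcpy(output, st, 200)
    Buffer.BlockCopy(st, 0, output, 0, 200);
}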
Could anyone help me optimize this piece of code? It's currently a large bottleneck, as it gets called very often. Even a 25% speed improvement would be significant.
public int ReadInt(int length)
{
    if (Position + length > Length)
        throw new BitBufferException("Not enough bits remaining.");
    int result = 0;
    while (length > 0)
    {
        int off = Position & 7;
        int count = 8 - off;
        if (count > length)
            count = length;
        int mask = (1 << count) - 1;
        int bits = (Data[Position >> 3] >> off);
        result |= (bits & mask) << (length - count);
        length -= count;
        Position += count;
    }
    return result;
}
Best answer goes to the fastest solution. Benchmarks are done with dotTrace; currently this block of code takes up about 15% of the total CPU time. The lowest number wins.
EDIT: Sample usage:
public class Auth : Packet
{
    int Field0;
    int ProtocolHash;
    int Field1;

    public override void Parse(BitBuffer buffer) // parameter type assumed; the original omitted it
    {
        Field0 = buffer.ReadInt(9);
        ProtocolHash = buffer.ReadInt(32);
        Field1 = buffer.ReadInt(8);
    }
}
The size of Data is variable, but in most cases it is 512 bytes.
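For a quick wall-clock comparison of candidate implementations outside the profiler, a simple Stopwatch loop works. A rough sketch, where BitBuffer stands for the class containing ReadInt and Reset() is a hypothetical method that rewinds Position to 0:
var buffer = new BitBuffer(new byte[512]); // hypothetical constructor taking the Data array
var sw = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < 1000000; i++)
{
    buffer.Reset(); // hypothetical: rewind Position to 0
    buffer.ReadInt(9);
    buffer.ReadInt(32);
    buffer.ReadInt(8);
}
sw.Stop();
Console.WriteLine("{0} ms", sw.ElapsedMilliseconds);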
How about using pointers and an unsafe context? You didn't say anything about your input data, method context, etc., so I tried to deduce all of this myself.
public class BitTest
{
    private int[] _data;

    public BitTest(int[] data)
    {
        Length = data.Length * 4 * 8;
        // +2, because we use byte* and long* later
        // and don't want to read outside the array memory
        _data = new int[data.Length + 2];
        Array.Copy(data, _data, data.Length);
    }

    public int Position { get; private set; }
    public int Length { get; private set; }
and the ReadInt method. I hope the comments shed a little light on the solution:
    public unsafe int ReadInt(int length)
    {
        if (Position + length > Length)
            throw new ArgumentException("Not enough bits remaining.");
        // method returns int, so getting more than 32 bits is pointless
        if (length > 4 * 8)
            throw new ArgumentException();

        int bytePosition = Position / 8;
        int bitPosition = Position % 8;
        Position += length;
        // get an int* on the array to start with
        fixed (int* array = _data)
        {
            // change pointer to byte*
            byte* bt = (byte*)array;
            // skip already-read bytes and change pointer type to long*
            long* ptr = (long*)(bt + bytePosition);
            // read value from current pointer position
            long value = *ptr;
            // take only the necessary bits
            value &= (1L << (length + bitPosition)) - 1;
            value >>= bitPosition;
            // cast value to int before returning
            return (int)value;
        }
    }
}
I didn't test the method, but would bet it's much faster than your approach.
My simple test code:
var data = new[] { 1 | (1 << 8 + 1) | (1 << 16 + 2) | (1 << 24 + 3) };
var test = new BitTest(data);
var bytes = Enumerable.Range(0, 4)
    .Select(x => test.ReadInt(8))
    .ToArray();
bytes contains { 1, 2, 4, 8 }, as expected.
I don't know if this gives you a significant improvement, but it should give you some numbers.
Instead of creating new int variables inside the loop (which takes time), reserve those variables before entering the loop:
public int ReadInt(int length)
{
    if (Position + length > Length)
        throw new BitBufferException("Not enough bits remaining.");
    int result = 0;
    int off = 0;
    int count = 0;
    int mask = 0;
    int bits = 0;
    while (length > 0)
    {
        off = Position & 7;
        count = 8 - off;
        if (count > length)
            count = length;
        mask = (1 << count) - 1;
        bits = (Data[Position >> 3] >> off);
        result |= (bits & mask) << (length - count);
        length -= count;
        Position += count;
    }
    return result;
}
Hope this increases your performance, even if only a bit.
Hi, I want to find a checksum of a single string. Here are the requirements for the checksum:
A 32-bit checksum represented as 8 hexadecimal characters.
It should be XOR of header + session + body + message.
Let's suppose header + session + body + message = "This is test string", and I want to calculate the checksum of this. So far I have developed the code below.
The checksum is calculated correctly if the string length (byte[] data) is a multiple of 4.
If "data" is not a multiple of 4 I receive exception as
"System.IndexOutOfRangeException: Index was outside the bounds of the array".
I will be taking different inputs having different string length from user and hence the string length will be variable(means some time user can enter only ABCDE some times q and A and so on.). How can I fix this exception issue and calculate correct checksum with multiple of 4.
public string findchecksum(string userinput)
{
    try
    {
        ASCIIEncoding enc = new ASCIIEncoding();
        byte[] data = Encoding.ASCII.GetBytes(userinput);
        byte[] checksum = new byte[4];
        for (int i = 16; i <= data.Length - 1; i += 4)
        {
            checksum[0] = (byte)(checksum[0] ^ data[i]);
            checksum[1] = (byte)(checksum[1] ^ data[i + 1]);
            checksum[2] = (byte)(checksum[2] ^ data[i + 2]);
            checksum[3] = (byte)(checksum[3] ^ data[i + 3]);
        }
        int check = 0;
        for (int i = 0; i <= 3; i++)
        {
            int r = (Convert.ToInt32(checksum[i]));
            int c = (-(r + (1))) & (0xff);
            c <<= (24 - (i * 8));
            check = (check | c);
        }
        return check.ToString("X");
Because you use i+3 inside your loop, your array size always has to be divisible by 4. You should extend your data array to meet that requirement before entering the loop:
byte[] data = Encoding.ASCII.GetBytes(cmd);
if (data.Length % 4 != 0)
{
    var data2 = new byte[(data.Length / 4 + 1) * 4];
    Array.Copy(data, data2, data.Length);
    data = data2;
}
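Putting it together, a sketch of findchecksum with the padding applied before the XOR loop (keeping the original loop start at offset 16, which appears to deliberately skip a 16-byte header):
public string findchecksum(string userinput)
{
    byte[] data = Encoding.ASCII.GetBytes(userinput);
    // pad with zero bytes up to the next multiple of 4
    if (data.Length % 4 != 0)
    {
        var data2 = new byte[(data.Length / 4 + 1) * 4];
        Array.Copy(data, data2, data.Length);
        data = data2;
    }
    byte[] checksum = new byte[4];
    for (int i = 16; i <= data.Length - 1; i += 4)
    {
        checksum[0] ^= data[i];
        checksum[1] ^= data[i + 1];
        checksum[2] ^= data[i + 2];
        checksum[3] ^= data[i + 3];
    }
    int check = 0;
    for (int i = 0; i <= 3; i++)
    {
        int c = (-(checksum[i] + 1)) & 0xff; // bitwise complement of the byte
        check |= c << (24 - i * 8);
    }
    return check.ToString("X");
}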
For testing purpose, I need to generate a random string, which is then encoded into byte array for transferring over the Web and decoded back to a result string. The test uses NUnit framework to compare the original string with the result string. Since the encoded byte array has to be friendly for Web, it is encoded with UTF-8.
The string is encoded into a byte array by Encoder.GetBytes from UTF8Encoding. The byte array is decoded to string by Decoder.GetChars from UTF8Encoding.
The original string needs to be generated randomly and contain any sequence of characters, which can be encoded/decoded using UTF-8 encoding.
My first attempt to generate the string was:
public static String RandomString(Random rnd, Int32 length) {
    StringBuilder str = new StringBuilder(length);
    for (int i = 0; i < length; i++)
        str.Append((char)rnd.Next(char.MinValue, char.MaxValue));
    return str.ToString();
}
The above code produces strings containing unpaired surrogate characters, which are invalid sequences for UTF-8 encoding.
I found some suggestions on the web and improved the code:
public static String RandomString(Random rnd, Int32 length) {
    StringBuilder str = new StringBuilder(length);
    for (int i = 0; i < length; i++) {
        char c = (char)rnd.Next(char.MinValue, char.MaxValue);
        while (c >= 0xD800 && c <= 0xDFFF)
            c = (char)rnd.Next(char.MinValue, char.MaxValue);
        str.Append(c);
    }
    return str.ToString();
}
The above code has no problem with encoding, but decoding the byte array fails. Furthermore, I am not sure that the code can cover all possible cases.
Any suggestions on how to generate a random string with the given requirements in C#?
Update: using the random string in encoding/decoding:
public static Encoder Utf8Encode = new UTF8Encoding(false, true).GetEncoder();
public static Decoder Utf8Decode = new UTF8Encoding(false, true).GetDecoder();

public unsafe void TestString(Random rnd, int length, byte* byteArray,
                              int arrayLenght) {
    int encodedLen;
    String str = RandomString(rnd, length);
    fixed (char* pStr = str) {
        encodedLen = Utf8Encode.GetBytes(pStr, str.Length, byteArray,
                                         arrayLenght, true);
    }
    char* buffer = stackalloc char[8192];
    int decodedLen = Utf8Decode.GetChars(byteArray, encodedLen, buffer,
                                         8192, true);
    String res = new String(buffer, 0, decodedLen);
    Assert.AreEqual(str, res);
}
I have used the code below for generating random UTF-8 character byte sequences. I can't guarantee it captures every aspect of the UTF-8 spec, but it was valuable for my testing purposes, so I'm posting it here.
private static readonly (int, int)[] HeadByteDefinitions =
{
    (1 << 7, 0b0000_0000),
    (1 << 5, 0b1100_0000),
    (1 << 4, 0b1110_0000),
    (1 << 3, 0b1111_0000)
};

static byte[] RandomUtf8Char(Random gen)
{
    const int totalNumberOfUtf8Chars = (1 << 7) + (1 << 11) + (1 << 16) + (1 << 21);
    int tailByteCnt;
    var rnd = gen.Next(totalNumberOfUtf8Chars);
    if (rnd < (1 << 7))
        tailByteCnt = 0;
    else if (rnd < (1 << 7) + (1 << 11))
        tailByteCnt = 1;
    else if (rnd < (1 << 7) + (1 << 11) + (1 << 16))
        tailByteCnt = 2;
    else
        tailByteCnt = 3;
    var (range, offset) = HeadByteDefinitions[tailByteCnt];
    var headByte = Convert.ToByte(gen.Next(range) + offset);
    var tailBytes = Enumerable.Range(0, tailByteCnt)
        .Select(_ => Convert.ToByte(gen.Next(1 << 6) + 0b1000_0000));
    return new[] {headByte}.Concat(tailBytes).ToArray();
}
I have 10 bytes - 4 bytes of low order, 4 bytes of high order, 2 bytes of highest order - that I need to convert to an unsigned long. I've tried a couple of different methods, but neither of them worked:
Try #1:
var id = BitConverter.ToUInt64(buffer, 0);
Try #2:
var id = GetID(buffer, 0);
long GetID(byte[] buffer, int startIndex)
{
    var lowOrderUnitId = BitConverter.ToUInt32(buffer, startIndex);
    var highOrderUnitId = BitConverter.ToUInt32(buffer, startIndex + 4);
    var highestOrderUnitId = BitConverter.ToUInt16(buffer, startIndex + 8);
    return lowOrderUnitId + (highOrderUnitId * 100000000) + (highestOrderUnitId * 10000000000000000);
}
Any help would be appreciated, thanks!
As the comments indicate, 10 bytes will not fit in a long (which is a 64-bit data type, i.e. 8 bytes). However, you could use a decimal (which is 128 bits wide, i.e. 16 bytes):
var lowOrderUnitId = BitConverter.ToUInt32(buffer, startIndex);
var highOrderUnitId = BitConverter.ToUInt32(buffer, startIndex + 4);
var highestOrderUnitId = BitConverter.ToUInt16(buffer, startIndex + 8);
decimal n = highestOrderUnitId;
n *= 4294967296m; // shift left 32 bits: multiply by 2^32 (UInt32.MaxValue + 1, not UInt32.MaxValue)
n += highOrderUnitId;
n *= 4294967296m;
n += lowOrderUnitId;
I've not actually tested this, but I think it will work...
As has been mentioned, a ulong isn't large enough to hold 10 bytes of data; it's only 8 bytes. You'd need to use a decimal. The most efficient way (not to mention the least code) would probably be to get a UInt64 out of it first, then add the high-order bits:
ushort high = BitConverter.ToUInt16(buffer, 0);
ulong low = BitConverter.ToUInt64(buffer, 2);
decimal num = (decimal)high * ulong.MaxValue + high + low;
(You need to add high a second time because otherwise you'd need to multiply by the value ulong.MaxValue + 1, and that's a lot of annoying casting and parentheses.)
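A quick sanity check of that expression, building a hypothetical 10-byte buffer with the two highest-order bytes first (matching the offsets used above), highest = 3, high = 2, low = 1:
byte[] buffer = new byte[10];
BitConverter.GetBytes((ushort)3).CopyTo(buffer, 0);            // highest-order 2 bytes
BitConverter.GetBytes(0x0000000200000001UL).CopyTo(buffer, 2); // high = 2, low = 1
ushort high = BitConverter.ToUInt16(buffer, 0);
ulong low = BitConverter.ToUInt64(buffer, 2);
decimal num = (decimal)high * ulong.MaxValue + high + low;
// num == 3 * 2^64 + 0x0000000200000001 == 55340232229718589441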