Reversing Galois multiplication of two byte arrays in C#

I need help finding the reverse of Galois multiplication in GF(2^128) in C#. The code below is used in my AES-GCM functions; I found it on the web.
I have searched the web for Galois division, but I have had no luck finding it.
Pardon my limited knowledge of this field and my English.
This function returns 2^x, i.e. a byte with only bit x set.
public byte BIT(byte x)
{
    return (byte)(1 << x);
}
This function converts a 4-element byte array (big-endian) to an unsigned int.
public uint WPA_GET_BE32(byte[] a)
{
    return (uint)((a[0] << 24) | (a[1] << 16) | (a[2] << 8) | a[3]);
}
This function converts an unsigned int into a 4-element byte array (big-endian).
public void WPA_PUT_BE32(out byte[] a, uint val)
{
    a = new byte[4];
    a[0] = (byte)((val >> 24) & 0xff);
    a[1] = (byte)((val >> 16) & 0xff);
    a[2] = (byte)((val >> 8) & 0xff);
    a[3] = (byte)(val & 0xff);
}
This function shifts a 16-byte block right by one bit, processing it as four big-endian 32-bit words.
public void shift_right_block(ref byte[] v)
{
    uint val;
    byte[] temp;

    temp = v.Skip(12).Take(4).ToArray();
    val = WPA_GET_BE32(temp);
    val >>= 1;
    if ((v[11] & 0x01) > 0) val |= 0x80000000;
    WPA_PUT_BE32(out temp, val);
    Array.Copy(temp, 0, v, 12, 4);

    temp = v.Skip(8).Take(4).ToArray();
    val = WPA_GET_BE32(temp);
    val >>= 1;
    if ((v[7] & 0x01) > 0) val |= 0x80000000;
    WPA_PUT_BE32(out temp, val);
    Array.Copy(temp, 0, v, 8, 4);

    temp = v.Skip(4).Take(4).ToArray();
    val = WPA_GET_BE32(temp);
    val >>= 1;
    if ((v[3] & 0x01) > 0) val |= 0x80000000;
    WPA_PUT_BE32(out temp, val);
    Array.Copy(temp, 0, v, 4, 4);

    temp = v.Skip(0).Take(4).ToArray();
    val = WPA_GET_BE32(temp);
    val >>= 1;
    WPA_PUT_BE32(out temp, val);
    Array.Copy(temp, 0, v, 0, 4);
}
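As an aside, the four unrolled 32-bit steps amount to shifting the whole 16-byte block right by one bit; a byte-at-a-time sketch of the same operation (hypothetical name shift_right_block_compact, equivalent in intent but not tested against the original):
public void shift_right_block_compact(ref byte[] v)
{
    byte carry = 0;
    for (int i = 0; i < 16; i++)
    {
        // The low bit of this byte becomes the high bit of the next one.
        byte next = (byte)(v[i] & 0x01);
        v[i] = (byte)((v[i] >> 1) | (carry << 7));
        carry = next;
    }
}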
This function exclusive-ORs two 16-byte arrays, storing the result in dest.
public void c_xor_16(ref byte[] dest, byte[] src)
{
    for (int ndx = 0; ndx < 16; ndx++) dest[ndx] ^= src[ndx];
}
This is the main function; byte array z receives the result of the GF multiplication.
public void c_gf_mult(byte[] x, byte[] y, ref byte[] z)
{
    byte[] v = new byte[16];
    z = new byte[16];
    Array.Copy(y, v, 16);
    for (int i = 0; i < 16; i++)
    {
        for (int j = 0; j < 8; j++)
        {
            if ((byte)(x[i] & BIT((byte)(7 - j))) > 0)
            {
                c_xor_16(ref z, v);
            }
            if ((byte)(v[15] & 0x01) > 0)
            {
                shift_right_block(ref v);
                v[0] ^= 0xe1;
            }
            else
            {
                shift_right_block(ref v);
            }
        }
    }
}
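As for the original question: there is no "Galois division" routine as such to find. Instead, every nonzero element of GF(2^128) has a multiplicative inverse, and a / b = a * inverse(b). By the finite-field analogue of Fermat's little theorem, inverse(b) = b^(2^128 - 2), which can be computed with the multiply above using square-and-multiply. A sketch built on c_gf_mult (hypothetical name c_gf_inverse; it assumes GCM's bit-reflected representation, in which { 0x80, 0, ..., 0 } is the field element 1, and it has not been verified against GCM test vectors):
public byte[] c_gf_inverse(byte[] x)
{
    // result starts at the identity element "1".
    byte[] result = new byte[16];
    result[0] = 0x80;
    // power holds x^(2^i); it starts at x^(2^0) = x.
    byte[] power = new byte[16];
    Array.Copy(x, power, 16);
    // 2^128 - 2 = 2^1 + 2^2 + ... + 2^127, so multiply in each x^(2^i) for i = 1..127.
    for (int i = 1; i < 128; i++)
    {
        byte[] squared = new byte[16];
        c_gf_mult(power, power, ref squared); // power = x^(2^i)
        power = squared;
        byte[] product = new byte[16];
        c_gf_mult(result, power, ref product);
        result = product;
    }
    return result;
}
Division is then c_gf_mult(a, c_gf_inverse(b), ref quotient). This costs 254 field multiplications per inverse, which is fine for experimentation but slow in a hot path.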

Related

OpenSSL HMACSHA256 produces a different result compared to .NET

I am using C# and C++ with OpenSSL to compute an HMACSHA256 hash with a key, and the two produce different results. What am I doing wrong?
C# code:
public static string CreateSignature(string signingString, string sharedKey)
{
    var key = Encoding.ASCII.GetBytes(sharedKey);
    var hmac = new HMACSHA256(key);
    var data = Encoding.ASCII.GetBytes(signingString);
    var hash = hmac.ComputeHash(data);
    return Convert.ToBase64String(hash);
}
C++ code:
std::string SignatureProvider::getSignature(std::string stringToSign, std::string key)
{
    const char* pKey = key.c_str();
    const char* pData = stringToSign.c_str();
    unsigned char* result = nullptr;
    unsigned int len = 32;
    result = (unsigned char*)malloc(sizeof(char) * len);
    HMAC_CTX ctx;
    HMAC_CTX_init(&ctx);
    HMAC_Init_ex(&ctx, pKey, strlen(pKey), EVP_sha256(), NULL);
    HMAC_Update(&ctx, (unsigned char*)&pData, strlen(pData));
    HMAC_Final(&ctx, result, &len);
    HMAC_CTX_cleanup(&ctx);
    return base64_encode(result, len);
}
std::string base64_encode(unsigned char const* bytes_to_encode, unsigned int in_len)
{
    std::string ret;
    int i = 0;
    int j = 0;
    unsigned char char_array_3[3];
    unsigned char char_array_4[4];
    while (in_len--) {
        char_array_3[i++] = *(bytes_to_encode++);
        if (i == 3) {
            char_array_4[0] = (char_array_3[0] & 0xfc) >> 2;
            char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4);
            char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6);
            char_array_4[3] = char_array_3[2] & 0x3f;
            for (i = 0; i < 4; i++)
                ret += base64_chars[char_array_4[i]];
            i = 0;
        }
    }
    if (i)
    {
        for (j = i; j < 3; j++)
            char_array_3[j] = '\0';
        char_array_4[0] = (char_array_3[0] & 0xfc) >> 2;
        char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4);
        char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6);
        for (j = 0; j < i + 1; j++)
            ret += base64_chars[char_array_4[j]];
        while (i++ < 3)
            ret += '=';
    }
    return ret;
}
I included the Base64 conversion just for completeness; the results already differ before that step.
Why don't you use the one-shot HMAC function itself? The likely bug in your version is that HMAC_Update is passed &pData (the address of the local pointer variable) instead of pData, so it hashes the pointer's bytes rather than your string. I have tried with the following code, and both the C++ and C# versions produce the same HMAC:
std::string getSignature(std::string stringToSign, std::string key)
{
    const char* pKey = key.c_str();
    const char* pData = stringToSign.c_str();
    unsigned char result[EVP_MAX_MD_SIZE];
    unsigned int len = 0;
    // Note: pData itself is passed here, not &pData as in the original,
    // and the leaked malloc buffer is replaced by a stack buffer.
    HMAC(EVP_sha256(), pKey, strlen(pKey),
        (const unsigned char*)pData, strlen(pData), result, &len);
    return base64_encode(result, len);
}

Summing Int32 values represented by a byte array in C#

I have an array of audio data: a lot of Int32 values represented as bytes (each group of 4 bytes is one Int32), and I want to do some manipulation on the data (for example, add 10 to each Int32).
I convert the bytes to Int32, do the manipulation, and convert the result back to bytes, as in this example:
//byte[] buffer;
for (int i = 0; i < buffer.Length; i += 4)
{
    Int32 temp0 = BitConverter.ToInt32(buffer, i);
    temp0 += 10;
    byte[] temp1 = BitConverter.GetBytes(temp0);
    for (int j = 0; j < 4; j++)
    {
        buffer[i + j] = temp1[j];
    }
}
But I would like to know if there is a better way to do such manipulation.
You can check the .NET Reference Source for pointers (grin) on how to convert from/to big endian.
class intFromBigEndianByteArray
{
    public byte[] b;
    public int this[int i]
    {
        get
        {
            i <<= 2; // i *= 4; // optional
            return (int)b[i] << 24 | (int)b[i + 1] << 16 | (int)b[i + 2] << 8 | b[i + 3];
        }
        set
        {
            i <<= 2; // i *= 4; // optional
            b[i]     = (byte)(value >> 24);
            b[i + 1] = (byte)(value >> 16);
            b[i + 2] = (byte)(value >> 8);
            b[i + 3] = (byte)value;
        }
    }
}
and sample use:
byte[] buffer = { 127, 255, 255, 255, 255, 255, 255, 255 }; // big endian { int.MaxValue, -1 }
//bool check = BitConverter.IsLittleEndian; // true
//int test = BitConverter.ToInt32(buffer, 0); // -129 (incorrect because little endian)
var fakeIntBuffer = new intFromBigEndianByteArray() { b = buffer };
fakeIntBuffer[0] += 2; // { 128, 0, 0, 1 } = big endian int.MinValue + 1
fakeIntBuffer[1] += 2; // { 0, 0, 0, 1 } = big endian 1
Debug.Print(string.Join(", ", buffer)); // "128, 0, 0, 1, 0, 0, 0, 1"
For better performance you can look into parallel processing and SIMD instructions - Using SSE in C#
For even better performance, you can look into Utilizing the GPU with c#
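On newer .NET (Core 2.1 and later), there is also a much simpler route for the native-endian case: reinterpret the byte[] as a Span<int> with MemoryMarshal.Cast and mutate it in place. A sketch, assuming (like the BitConverter loop in the question) that the bytes are in native, typically little-endian, order and that buffer.Length is a multiple of 4:
using System;
using System.Runtime.InteropServices;

Span<int> ints = MemoryMarshal.Cast<byte, int>(buffer);
for (int i = 0; i < ints.Length; i++)
    ints[i] += 10; // the writes land directly in 'buffer'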
How about the following approach:
struct My
{
    public int Int;
}

var bytes = Enumerable.Range(0, 20).Select(n => (byte)(n + 240)).ToArray();
foreach (var b in bytes) Console.Write("{0,-4}", b);
// Pin the managed memory
GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
for (int i = 0; i < bytes.Length; i += 4)
{
    // Copy the data
    My my = Marshal.PtrToStructure<My>(handle.AddrOfPinnedObject() + i);
    my.Int += 10;
    // Copy back
    Marshal.StructureToPtr(my, handle.AddrOfPinnedObject() + i, true);
}
// Unpin
handle.Free();
foreach (var b in bytes) Console.Write("{0,-4}", b);
I made it just for fun.
Not sure that's less ugly.
Will it be faster? I don't know. Test it.
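If you do test it, a minimal Stopwatch harness along these lines keeps the comparison honest; RunBitConverterVersion and RunMarshalVersion are hypothetical wrappers around the two approaches above, and both should be run once beforehand (in a release build) so JIT time is not measured:
var sw = new System.Diagnostics.Stopwatch();
RunBitConverterVersion(buffer); // warm-up
RunMarshalVersion(buffer);      // warm-up

sw.Restart();
for (int n = 0; n < 1000; n++) RunBitConverterVersion(buffer);
sw.Stop();
Console.WriteLine("BitConverter: {0} ms", sw.ElapsedMilliseconds);

sw.Restart();
for (int n = 0; n < 1000; n++) RunMarshalVersion(buffer);
sw.Stop();
Console.WriteLine("Marshal: {0} ms", sw.ElapsedMilliseconds);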

ROR a byte array with C#

Is there a way to ROR (rotate right) an entire byte[] by a specific amount?
I've already done some research and found a solution to ROL (rotate left) a byte[]:
public static byte[] ROL_ByteArray(byte[] arr, int nShift)
{
    //Performs bitwise circular shift of 'arr' by 'nShift' bits to the left
    //RETURN:
    //      = Result
    byte[] resArr = new byte[arr.Length];
    if (arr.Length > 0)
    {
        int nByteShift = nShift / (sizeof(byte) * 8); //Adjusted after #dasblinkenlight's correction
        int nBitShift = nShift % (sizeof(byte) * 8);
        if (nByteShift >= arr.Length)
            nByteShift %= arr.Length;
        int s = arr.Length - 1;
        int d = s - nByteShift;
        for (int nCnt = 0; nCnt < arr.Length; nCnt++, d--, s--)
        {
            while (d < 0)
                d += arr.Length;
            while (s < 0)
                s += arr.Length;
            byte byteS = arr[s];
            resArr[d] |= (byte)(byteS << nBitShift);
            resArr[d > 0 ? d - 1 : resArr.Length - 1] |= (byte)(byteS >> (sizeof(byte) * 8 - nBitShift));
        }
    }
    return resArr;
}
The author of this code can be found here: Is there a function to do circular bitshift for a byte array in C#?
Any idea how I can do the same thing, but performing a ROR operation instead of a ROL operation on a byte[]?
static byte[] ROR_ByteArray(byte[] arr, int nShift)
{
    if (arr.Length == 0) return new byte[0];
    // A right rotation by n bits is a left rotation by (totalBits - n) bits;
    // nShift is reduced modulo the total bit count so large shifts do not go negative.
    int totalBits = arr.Length * 8;
    return ROL_ByteArray(arr, totalBits - (nShift % totalBits));
}
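A quick sanity check (assuming the ROL_ByteArray above):
byte[] data = { 0x01, 0x80 };            // bits: 00000001 10000000
byte[] rotated = ROR_ByteArray(data, 1); // expected: 0x00, 0xC0
Console.WriteLine(BitConverter.ToString(rotated)); // "00-C0"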

Convert C# method to Java

I am looking to convert the following bit of C# code to Java, but I am having a hard time coming up with an equivalent.
Working C# Code:
private ushort ConvertBytes(byte a, byte b, bool flip)
{
    byte[] buffer = new byte[] { a, b };
    if (!flip)
    {
        return BitConverter.ToUInt16(buffer, 0);
    }
    ushort num = BitConverter.ToUInt16(buffer, 0);
    //this.Weight = num;
    int xy = 0x3720;
    int num2 = 0x3720 - num;
    if (num2 > -1)
    {
        return Convert.ToUInt16(num2);
    }
    return 1;
}
Here is the Java code that does not work. The big challenge is BitConverter.ToUInt16(buffer, 0). How do I get the Java equivalent of the working C# method?
Java code that is wrong:
private short ConvertBytes(byte a, byte b, boolean flip) {
    byte[] buffer = new byte[] { a, b };
    if (!flip) {
        return (short) ((a << 8) | (b & 0xFF));
    }
    short num = (short) ((a << 8) | (b & 0xFF));
    //this.Weight = num;
    int num2 = 0x3720 - num;
    if (num2 > -1) {
        return (short) num2;
    }
    return 1;
}
BitConverter.ToUInt16 reads the bytes in little-endian order (on typical hardware), and ushort is unsigned, so the Java version needs both a little-endian ByteBuffer and a 0xFFFF mask:
private short ConvertBytes(byte a, byte b, boolean flip) {
    ByteBuffer byteBuffer = ByteBuffer.allocate(2);
    byteBuffer.order(ByteOrder.LITTLE_ENDIAN); // match BitConverter's byte order
    byteBuffer.put(a);
    byteBuffer.put(b);
    int num = byteBuffer.getShort(0) & 0xFFFF; // mask to emulate the unsigned ushort
    if (!flip) {
        return (short) num;
    }
    //this.Weight = num;
    int num2 = 0x3720 - num;
    if (num2 > -1) {
        return (short) num2;
    }
    return 1;
}

Improving upon bit masking and shifting function

Can this function be made more efficient?
private unsafe uint GetValue(uint value, int bitsToGrab, int bitsToMoveOver)
{
    byte[] bytes = BitConverter.GetBytes(value);
    uint myBitMask = 0x80; //MSB of 8 bits (byte)
    int arrayIndex = 0;
    for (int i = 0; i < bitsToMoveOver; i++)
    {
        if (myBitMask == 0)
        {
            arrayIndex++;
            myBitMask = 0x80;
        }
        myBitMask >>= 1;
    }
    uint outputMask1 = (uint)(1 << (bitsToGrab - 1));
    uint returnVal = 0;
    for (int i = 0; i < bitsToGrab; i++)
    {
        if (myBitMask == 0)
        {
            arrayIndex++;
            myBitMask = 0x80;
        }
        if ((bytes[arrayIndex] & myBitMask) > 0)
        {
            returnVal |= outputMask1;
        }
        outputMask1 >>= 1;
        myBitMask >>= 1;
    }
    return returnVal;
}
I have an array of uints; each uint contains multiple pieces of data. To get a piece out, I pass in the number of bits and the offset of those bits, and build the output value from them.
The offset is generally on a byte boundary, but I cannot guarantee that it will be.
I'm really looking to see if I can simplify the code. Am I unnecessarily verbose, or could it be done a bit cleaner?
Updated function: How do you guys feel about this?
private unsafe uint GetValue(uint value, int bitsToGrab, int bitsToMoveOver)
{
    if (bitsToGrab + bitsToMoveOver >= 32)
    {
        return 0;
    }
    byte[] bytes = BitConverter.GetBytes(value);
    Array.Reverse(bytes);
    uint newValue = BitConverter.ToUInt32(bytes, 0);
    uint grabMask = (0xFFFFFFFF << (32 - bitsToGrab));
    grabMask >>= bitsToMoveOver;
    uint returnVal = (newValue & grabMask) >> (32 - bitsToMoveOver - bitsToGrab);
    return returnVal;
}
This needs testing (and it assumes bitsToGrab + bitsToMoveOver < 32; at exactly 32 the mask trick breaks, because C# masks 32-bit shift counts to five bits), but I think you can do this:
uint grabMask = ~(0xFFFFFFFF << (bitsToGrab + bitsToMoveOver));
return (value & grabMask) >> bitsToMoveOver;
Since the OP has indicated that it should be sampling bits from an internal binary representation of the number (including endian encoding), with byte order swapping within each word, you can swap bytes first like this:
uint reorderedValue = ((value << 8) & 0xFF00FF00) | ((value >> 8) & 0x00FF00FF);
uint grabMask = ~(0xFFFFFFFF << (bitsToGrab + bitsToMoveOver));
return (reorderedValue & grabMask) >> bitsToMoveOver;
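For example, with made-up inputs (the simple variant without the byte swap):
uint value = 0xABCD1234;
int bitsToGrab = 8, bitsToMoveOver = 12;
uint grabMask = ~(0xFFFFFFFF << (bitsToGrab + bitsToMoveOver));
uint result = (value & grabMask) >> bitsToMoveOver;
Console.WriteLine(result.ToString("X")); // "D1", i.e. bits 12..19 of the input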
