jBCrypt 0.3 C# Port (BCrypt.net)

After looking into a bug in the original jBCrypt v0.1 C# port, BCrypt.net (see the related question), I decided to compare the new jBCrypt code against the old C# port to look for discrepancies and potential issues like the related question's bug.
Here is what I've found:
// original java (jBCrypt v0.3):
private static int streamtoword(byte data[], int offp[]) {
    int i;
    int word = 0;
    int off = offp[0];
    for (i = 0; i < 4; i++) {
        word = (word << 8) | (data[off] & 0xff);
        off = (off + 1) % data.length;
    }
    offp[0] = off;
    return word;
}
// port to C#:
private static uint StreamToWord(byte[] data, ref int offset)
{
    uint word = 0;
    for (int i = 0; i < 4; i++)
    {
        // note the difference with the omission of "& 0xff"
        word = (word << 8) | data[offset];
        offset = (offset + 1) % data.Length;
    }
    return word;
}
If the prior is incorrect, would the following fix it?
private static uint StreamToWord(byte[] data, ref int[] offsetp)
{
    uint word = 0;
    int offset = offsetp[0];
    for (int i = 0; i < 4; i++)
    {
        word = (word << 8) | (uint)(data[offset] & 0xff);
        offset = (offset + 1) % data.Length;
    }
    offsetp[0] = offset;
    return word;
}

The & 0xff is required in the Java version because in Java, bytes are signed. (Some argue that this is a bug.)
In C#, bytes are unsigned, so the & 0xff is unnecessary.
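A minimal sketch of that point (illustrative variable names only): in C#, a byte widens to uint with zero-extension, so the mask changes nothing.

    byte b = 0xFF;                        // as a signed Java byte this would be -1
    uint withMask = (uint)(b & 0xff);     // 255
    uint withoutMask = b;                 // also 255: byte is unsigned in C#
    Console.WriteLine(withMask == withoutMask); // True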

Related

How to edit this code to calculate CRC-32/MPEG in C#

I have this code which calculates CRC-32. I need to edit it to use polynomial 0x04C11DB7, initial value 0xFFFFFFFF, and final XOR 0, so that the CRC-32 of the string "123456789" is "0376E6E7". I found this code; it's very slow, but it works anyway.
internal static class Crc32
{
    internal static uint[] MakeCrcTable()
    {
        uint c;
        uint[] crcTable = new uint[256];
        for (uint n = 0; n < 256; n++)
        {
            c = n;
            for (int k = 0; k < 8; k++)
            {
                var res = c & 1;
                c = (res == 1) ? (0xEDB88320 ^ (c >> 1)) : (c >> 1);
            }
            crcTable[n] = c;
        }
        return crcTable;
    }

    internal static uint CalculateCrc32(byte[] str)
    {
        // note: rebuilding the table on every call is a large part of
        // why this is slow; caching it would help
        uint[] crcTable = Crc32.MakeCrcTable();
        uint crc = 0xffffffff;
        for (int i = 0; i < str.Length; i++)
        {
            byte c = str[i];
            crc = (crc >> 8) ^ crcTable[(crc ^ c) & 0xFF];
        }
        return ~crc; //(crc ^ (-1)) >> 0;
    }
}
Based on the added comments, what you are looking for is CRC-32/MPEG-2, which reverses the direction of the CRC and eliminates the final exclusive-or, compared to the implementation you have, which is CRC-32/ISO-HDLC.
To get there, you need to flip the CRC from reflected to forward: bit-flip the polynomial to get 0x04C11DB7, check the high bit instead of the low bit, reverse the shifts (both in the table generation and in the use of the table), and exclusive-or with the high byte of the CRC instead of the low byte.
To remove the final exclusive-or, remove the tilde at the end.
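Here is a sketch of those changes applied to the code from the question (verify it against the "123456789" check value 0376E6E7 given above):

internal static class Crc32Mpeg2
{
    internal static uint[] MakeCrcTable()
    {
        uint[] crcTable = new uint[256];
        for (uint n = 0; n < 256; n++)
        {
            // the byte now enters at the top of the 32-bit register
            uint c = n << 24;
            for (int k = 0; k < 8; k++)
            {
                // check the high bit, shift left, and use the
                // bit-flipped polynomial 0x04C11DB7
                c = (c & 0x80000000) != 0 ? (0x04C11DB7 ^ (c << 1)) : (c << 1);
            }
            crcTable[n] = c;
        }
        return crcTable;
    }

    internal static uint CalculateCrc32(byte[] data)
    {
        uint[] crcTable = MakeCrcTable();
        uint crc = 0xffffffff;
        for (int i = 0; i < data.Length; i++)
        {
            // exclusive-or with the high byte of the CRC instead of the low byte
            crc = (crc << 8) ^ crcTable[((crc >> 24) ^ data[i]) & 0xFF];
        }
        return crc; // no final exclusive-or
    }
}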

Method that returns uint16_t array properly in C++ [duplicate]

This question already has answers here: How to return an array from a function? (5 answers). Closed 3 years ago.
I have C# code that returns a uint array, but I want to do the same in C++. I looked at other posts; they use a pointer for the array, which mine is not. Does anyone know how to return a uint16_t array properly?
This is the C# code, which works fine:
public static UInt16[] GetIntArrayFromByteArray(byte[] byteArray)
{
    if ((byteArray.Length % 2) == 1)
        Array.Resize(ref byteArray, byteArray.Length + 1);

    UInt16[] intArray = new UInt16[byteArray.Length / 2];
    for (int i = 0; i < byteArray.Length; i += 2)
        intArray[i / 2] = (UInt16)((byteArray[i] << 8) | byteArray[i + 1]);

    return intArray;
}
This is the C++ code, which produces a syntax error:
uint16_t[] GetIntArrayFromByteArray(byte[] byteArray)
{
    //if ((byteArray.Length % 2) == 1)
    //Array.Resize(ref byteArray, byteArray.Length + 1);
    uint16_t[] intArray = new uint16_t[10];
    for (int i = 0; i < 10; i += 2)
        intArray[i / 2] = (uint16_t)((byteArray[i] << 8) | byteArray[i + 1]);
    return intArray;
}
Do not use Type[] ever. Use std::vector:
std::vector<uint16_t> GetIntArrayFromByteArray(std::vector<byte> byteArray)
{
    // If the number of bytes is not even, put a zero at the end
    if ((byteArray.size() % 2) == 1)
        byteArray.push_back(0);

    std::vector<uint16_t> intArray;
    for (int i = 0; i < byteArray.size(); i += 2)
        intArray.push_back((uint16_t)((byteArray[i] << 8) | byteArray[i + 1]));

    return intArray;
}
You can also use std::array<Type, Size> if the array would be fixed size.
More optimal version (thanks to @Aconcagua) (demo)
Here is the full code with a more optimal version that doesn't copy or alter the input. This is better if you have long input arrays. It's possible to write it shorter, but I wanted to keep it verbose and beginner-friendly.
#include <iostream>
#include <vector>
#include <cstdint> // for uint16_t

using byte = unsigned char;

std::vector<uint16_t> GetIntArrayFromByteArray(const std::vector<byte>& byteArray)
{
    const int inputSize = byteArray.size();
    const bool inputIsOddCount = inputSize % 2 != 0;
    const int finalSize = (int)(inputSize / 2.0 + 0.5);
    // Ignore the last odd item in loop and handle it later
    const int loopLength = inputIsOddCount ? inputSize - 1 : inputSize;

    std::vector<uint16_t> intArray;
    // Reserve space for all items
    intArray.reserve(finalSize);
    for (int i = 0; i < loopLength; i += 2)
    {
        intArray.push_back((uint16_t)((byteArray[i] << 8) | byteArray[i + 1]));
    }
    // If the input was odd-count, we still have one byte to add, along with a zero
    if (inputIsOddCount)
    {
        // The zero in this expression is redundant but illustrative
        intArray.push_back((uint16_t)((byteArray[inputSize - 1] << 8) | 0));
    }
    return intArray;
}

int main() {
    const std::vector<byte> numbers{2, 0, 0, 0, 1, 0, 0, 1};
    const std::vector<uint16_t> result(GetIntArrayFromByteArray(numbers));
    for (uint16_t num : result) {
        std::cout << num << "\n";
    }
    return 0;
}

Calculating CRC16 in C#

I'm trying to port some old code from C to C# which basically receives a string and returns its CRC16...
The C method is as follows:
#define CRC_MASK 0x1021 /* x^16 + x^12 + x^5 + x^0 */

UINT16 CRC_Calc (unsigned char *pbData, int iLength)
{
    UINT16 wData, wCRC = 0;
    int i;
    for ( ; iLength > 0; iLength--, pbData++) {
        wData = (UINT16) (((UINT16) *pbData) << 8);
        for (i = 0; i < 8; i++, wData <<= 1) {
            if ((wCRC ^ wData) & 0x8000)
                wCRC = (UINT16) ((wCRC << 1) ^ CRC_MASK);
            else
                wCRC <<= 1;
        }
    }
    return wCRC;
}
My ported C# code is this:
private static ushort Calc(byte[] data)
{
    ushort wData, wCRC = 0;
    for (int i = 0; i < data.Length; i++)
    {
        wData = Convert.ToUInt16(data[i] << 8);
        for (int j = 0; j < 8; j++, wData <<= 1)
        {
            var a = (wCRC ^ wData) & 0x8000;
            if (a != 0)
            {
                var c = (wCRC << 1) ^ 0x1021;
                wCRC = Convert.ToUInt16(c);
            }
            else
            {
                wCRC <<= 1;
            }
        }
    }
    return wCRC;
}
The test string is "OPN"... It must return a ushort, which is of course 2 bytes, A8 A9, and CRC_MASK is the polynomial for that calculation. I did find several examples of CRC16 here and around the web, but none of them achieves this result, since this CRC calculation must match the one on the device we are connecting to.
Where is the mistake? I really appreciate any help.
Thanks! best regards
Gutemberg
UPDATE
Following the answer from @rcgldr, I put together the following sample:
_serial = new SerialPort("COM6", 19200, Parity.None, 8, StopBits.One);
_serial.Open();
_serial.Encoding = Encoding.GetEncoding(1252);
_serial.DataReceived += Serial_DataReceived;

var msg = "OPN";
var data = Encoding.GetEncoding(1252).GetBytes(msg);
var crc = BitConverter.GetBytes(Calc(data));
var msb = crc[0].ToString("X");
var lsb = crc[1].ToString("X");
// The following line must be something like: \x16OPN\x17\xA8\xA9
var cmd = string.Format(@"{0}{1}{2}\x{3}\x{4}", SYN, msg, ETB, msb, lsb);
//var cmd = "\x16OPN\x17\xA8\xA9";
_serial.Write(cmd);
The value of the cmd variable is what I'm trying to send to the device. If you have a look at the commented cmd value, this is a working string. The 2 bytes of the CRC16 go in the last two parameters (msb and lsb). So, in the sample here, msb MUST be "\xA8" and lsb MUST be "\xA9" for the command to work (the CRC16 must match on the device).
Any clues?
Thanks again.
UPDATE 2
For those who hit the same case, where you need to format the string with \x, this is what I did to get it working:
protected string ToMessage(string data)
{
    var msg = data + ETB;
    var crc = CRC16.Compute(msg);
    var fullMsg = string.Format(@"{0}{1}{2:X}{3:X}", SYN, msg, crc[0], crc[1]);
    return fullMsg;
}
This returns the full message that I need, including the \x in it. The SYN variable is '\x16' and ETB is '\x17'.
Thank you all for the help!
Gutemberg
The problem here is that the message including the ETB (\x17) is 4 bytes long (the leading sync byte isn't used for the CRC): "OPN\x17" == {'O', 'P', 'N', 0x17}, which results in a CRC of {0xA8, 0xA9} being appended to the message. So the CRC function is correct, but the original test data didn't include the 4th byte, which is 0x17.
This is a working example (at least with VS2015 express).
private static ushort Calc(byte[] data)
{
    ushort wCRC = 0;
    for (int i = 0; i < data.Length; i++)
    {
        wCRC ^= (ushort)(data[i] << 8);
        for (int j = 0; j < 8; j++)
        {
            if ((wCRC & 0x8000) != 0)
                wCRC = (ushort)((wCRC << 1) ^ 0x1021);
            else
                wCRC <<= 1;
        }
    }
    return wCRC;
}
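A quick check of the point above, using the Calc method from this answer: the CRC is computed over the message including the ETB byte.

    var data = new byte[] { (byte)'O', (byte)'P', (byte)'N', 0x17 }; // "OPN\x17"
    ushort crc = Calc(data);
    Console.WriteLine(crc.ToString("X4")); // A8A9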

Improving upon bit masking and shifting function

Can this function be improved to make it more efficient?
private unsafe uint GetValue(uint value, int bitsToGrab, int bitsToMoveOver)
{
    byte[] bytes = BitConverter.GetBytes(value);
    uint myBitMask = 0x80; // MSB of 8 bits (byte)
    int arrayIndex = 0;
    for (int i = 0; i < bitsToMoveOver; i++)
    {
        if (myBitMask == 0)
        {
            arrayIndex++;
            myBitMask = 0x80;
        }
        myBitMask >>= 1;
    }

    uint outputMask1 = (uint)(1 << (bitsToGrab - 1));
    uint returnVal = 0;
    for (int i = 0; i < bitsToGrab; i++)
    {
        if (myBitMask == 0)
        {
            arrayIndex++;
            myBitMask = 0x80;
        }
        if ((bytes[arrayIndex] & myBitMask) > 0)
        {
            returnVal |= outputMask1;
        }
        outputMask1 >>= 1;
        myBitMask >>= 1;
    }
    return returnVal;
}
I have an array of uints; each uint contains multiple pieces of data. In order to get the information, I pass in the number of bits and the offset of those bits, and use them to build an output value.
The offset is generally on a byte boundary, but I cannot guarantee that it will be.
I'm really looking to see if I can simplify the code. Am I unnecessarily verbose, or could it be done a bit cleaner?
Updated function: How do you guys feel about this?
private unsafe uint GetValue(uint value, int bitsToGrab, int bitsToMoveOver)
{
    if (bitsToGrab + bitsToMoveOver >= 32)
    {
        return 0;
    }

    byte[] bytes = BitConverter.GetBytes(value);
    Array.Reverse(bytes);
    uint newValue = BitConverter.ToUInt32(bytes, 0);

    uint grabMask = (0xFFFFFFFF << (32 - bitsToGrab));
    grabMask >>= bitsToMoveOver;

    uint returnVal = (newValue & grabMask) >> (32 - bitsToMoveOver - bitsToGrab);
    return returnVal;
}
This needs testing (and assumes that bitsToGrab + bitsToMoveOver <= 32), but I think you can do this:
uint grabMask = ~(0xFFFFFFFF << (bitsToGrab + bitsToMoveOver));
return (value & grabMask) >> bitsToMoveOver;
Since the OP has indicated that it should be sampling bits from an internal binary representation of the number (including endian encoding), with byte order swapping within each word, you can swap bytes first like this:
uint reorderedValue = ((value << 8) & 0xFF00FF00) | ((value >> 8) & 0x00FF00FF);
uint grabMask = ~(0xFFFFFFFF << (bitsToGrab + bitsToMoveOver));
return (reorderedValue & grabMask) >> bitsToMoveOver;
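A hypothetical spot check of the plain (un-swapped) extraction above, with values chosen purely for illustration: grab 4 bits starting 8 bits up from the least significant bit of 0x00ABCDEF, which should yield the nibble 0xD.

    uint value = 0x00ABCDEF;
    int bitsToGrab = 4, bitsToMoveOver = 8;
    uint grabMask = ~(0xFFFFFFFF << (bitsToGrab + bitsToMoveOver)); // 0x00000FFF
    uint result = (value & grabMask) >> bitsToMoveOver;             // 0xD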

Converting a range into a bit array

I'm writing a time-critical piece of code in C# that requires me to convert two unsigned integers that define an inclusive range into a bit field. Ex:
uint x1 = 3;
uint x2 = 9;
// defines the range [3-9]
// must be converted to: 0000 0011 1111 1000
//      (bit positions)          98 7654 3
It may help to visualize the bits in reverse order
The maximum value for this range is a parameter given at run-time which we'll call max_val. Therefore, the bit field variable ought to be defined as a UInt32 array with size equal to max_val/32:
UInt32 MAX_DIV_32 = max_val / 32;
UInt32[] bitArray = new UInt32[MAX_DIV_32];
Given a range defined by the variables x1 and x2, what is the fastest way to perform this conversion?
Try this: calculate the range of array items that must be filled with all ones, fill them by iterating over that range, and finally set the items at both borders.
Int32 startIndex = (Int32)(x1 >> 5);
Int32 endIndex = (Int32)(x2 >> 5);

bitArray[startIndex] = UInt32.MaxValue << (Int32)(x1 & 31);
for (Int32 i = startIndex + 1; i <= endIndex; i++)
{
    bitArray[i] = UInt32.MaxValue;
}
bitArray[endIndex] &= UInt32.MaxValue >> (Int32)(31 - (x2 & 31));
Maybe the code is not 100% correct, but the idea should work.
I just tested it and found three bugs: the calculation at the start index required a mod 32, the 32 at the end index must be 31, and the assignment there must be a logical AND to handle the case of the start and end index being the same. It should be quite fast.
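A quick spot check of the corrected border math, using the example range [3-9] from the question (start and end fall in the same array item here):

    uint x1 = 3, x2 = 9;
    uint word = UInt32.MaxValue << (Int32)(x1 & 31);      // 0xFFFFFFF8
    word &= UInt32.MaxValue >> (Int32)(31 - (x2 & 31));   // 0x000003F8
    // 0x3F8 == 0000 0011 1111 1000, the expected bit field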
Just benchmarked it with equal distribution of x1 and x2 over the array.
Intel Core 2 Duo E8400 3.0 GHz, MS VirtualPC with Server 2003 R2 on Windows XP host.
Array length [bits]:          320         160         64
Performance [executions/s]:   33 million  43 million  54 million
One more optimization: x % 32 == x & 31, but I am unable to measure a performance gain. Because of only 10,000,000 iterations in my test, the fluctuations are quite high, and I am running in VirtualPC, making the situation even more unpredictable.
My solution for setting a whole range of bits in a BitArray to true or false:
public static BitArray SetRange(BitArray bitArray, Int32 offset, Int32 length, Boolean value)
{
    Int32[] ints = new Int32[(bitArray.Count >> 5) + 1];
    bitArray.CopyTo(ints, 0);

    var firstInt = offset >> 5;
    var lastInt = (offset + length) >> 5;

    Int32 mask = 0;
    if (value)
    {
        // set first and last int
        mask = (-1 << (offset & 31));
        if (lastInt != firstInt)
            ints[lastInt] |= ~(-1 << ((offset + length) & 31));
        else
            mask &= ~(-1 << ((offset + length) & 31));
        ints[firstInt] |= mask;

        // set all ints in between
        for (Int32 i = firstInt + 1; i < lastInt; i++)
            ints[i] = -1;
    }
    else
    {
        // set first and last int
        mask = ~(-1 << (offset & 31));
        if (lastInt != firstInt)
            ints[lastInt] &= -1 << ((offset + length) & 31);
        else
            mask |= -1 << ((offset + length) & 31);
        ints[firstInt] &= mask;

        // set all ints in between
        for (Int32 i = firstInt + 1; i < lastInt; i++)
            ints[i] = 0;
    }

    return new BitArray(ints) { Length = bitArray.Length };
}
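A hypothetical usage example (requires using System.Collections;): the question's inclusive range [3-9] means 7 bits starting at offset 3.

    var bits = new BitArray(16);
    bits = SetRange(bits, 3, 7, true); // sets bits 3..9, i.e. 0000 0011 1111 1000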
You could try:
UInt32 x1 = 3;
UInt32 x2 = 9;

UInt32 newInteger = (UInt32)(Math.Pow(2, x2 + 1) - 1) &
                   ~(UInt32)(Math.Pow(2, x1) - 1);
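A shift-based sketch of the same mask, assuming x2 < 31 (C# masks shift counts to the low 5 bits, so a shift of 32 would wrap around):

    uint newInteger = ((1u << (int)(x2 + 1)) - 1) & ~((1u << (int)x1) - 1); // 0x3F8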
Is there a reason not to use the System.Collections.BitArray class instead of a UInt32[]? Otherwise, I'd try something like this:
int minIndex = (int)x1 / 32;
int maxIndex = (int)x2 / 32;

// first handle the all zero regions and the all one region (if any)
for (int i = 0; i < minIndex; i++) {
    bitArray[i] = 0;
}
for (int i = minIndex + 1; i < maxIndex; i++) {
    bitArray[i] = UInt32.MaxValue; // set to all 1s
}
for (int i = maxIndex + 1; i < MAX_DIV_32; i++) {
    bitArray[i] = 0;
}

// now handle the tricky parts
uint maxBits = (2u << ((int)x2 - 32 * maxIndex)) - 1; // set to 1s up to max
uint minBits = ~((1u << ((int)x1 - 32 * minIndex)) - 1); // set to 1s after min

if (minIndex == maxIndex) {
    bitArray[minIndex] = maxBits & minBits;
}
else {
    bitArray[minIndex] = minBits;
    bitArray[maxIndex] = maxBits;
}
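A spot check of the tricky parts for the single-word case, using the example range [3-9] from the question:

    uint x1 = 3, x2 = 9;                          // minIndex == maxIndex == 0
    uint maxBits = (2u << ((int)x2 - 0)) - 1;     // 0x000003FF
    uint minBits = ~((1u << ((int)x1 - 0)) - 1);  // 0xFFFFFFF8
    uint word = maxBits & minBits;                // 0x000003F8, as expected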
I was bored enough to try doing it with a char array and using Convert.ToUInt32(string, int) to convert to a uint from base 2.
uint Range(int l, int h)
{
    // bits l..h inclusive: h+1 binary digits, the top h-l+1 of them ones
    char[] buffer = new char[h + 1];
    for (int i = 0; i < buffer.Length; i++)
    {
        buffer[i] = i < h - l + 1 ? '1' : '0';
    }
    return Convert.ToUInt32(new string(buffer), 2);
}
A simple benchmark shows that my method is about 5% faster than Angrey Jim's (even if you replace the second Pow with a bit shift).
It is probably easiest to convert it to produce a uint array if the upper bound is too big to fit into a single uint. It's a little cryptic, but I believe it works.
uint[] Range(int l, int h)
{
    // bits l..h inclusive, as in the single-uint version above
    char[] buffer = new char[h + 1];
    for (int i = 0; i < buffer.Length; i++)
    {
        buffer[i] = i < h - l + 1 ? '1' : '0';
    }

    int bitsInUInt = sizeof(uint) * 8;
    int numNeededUInts = (int)Math.Ceiling((decimal)buffer.Length /
                                           (decimal)bitsInUInt);
    uint[] uints = new uint[numNeededUInts];
    for (int j = uints.Length - 1, s = buffer.Length - bitsInUInt;
         j >= 0 && s >= 0;
         j--, s -= bitsInUInt)
    {
        uints[j] = Convert.ToUInt32(new string(buffer, s, bitsInUInt), 2);
    }

    int remainder = buffer.Length % bitsInUInt;
    if (remainder > 0)
    {
        uints[0] = Convert.ToUInt32(new string(buffer, 0, remainder), 2);
    }
    return uints;
}
Try this:
uint x1 = 3;
uint x2 = 9;

int cbToShift = (int)(x2 - x1 + 1); // 7 bits for the inclusive range [3-9]
uint nResult = ((1u << cbToShift) - 1) << (int)x1;
/*
(1<<7)-1 gives you 127 = 1111111, then you shift it left by 3 bits
*/
