How would I convert this crypto from C# to C?

This is the C# code I use:
public void Decrypt(byte[] @in, byte[] @out, int size)
{
    lock (this)
    {
        for (ushort i = 0; i < size; i++)
        {
            if (_server)
            {
                @out[i] = (byte)(@in[i] ^ 0xAB);
                @out[i] = (byte)((@out[i] << 4) | (@out[i] >> 4));
                @out[i] = (byte)(ConquerKeys.Key2[_inCounter >> 8] ^ @out[i]);
                @out[i] = (byte)(ConquerKeys.Key1[_inCounter & 0xFF] ^ @out[i]);
            }
            else
            {
                @out[i] = (byte)(ConquerKeys.Key1[_inCounter & 0xFF] ^ @in[i]);
                @out[i] = (byte)(ConquerKeys.Key2[_inCounter >> 8] ^ @out[i]);
                @out[i] = (byte)((@out[i] << 4) | (@out[i] >> 4));
                @out[i] = (byte)(@out[i] ^ 0xAB);
            }
            _inCounter = (ushort)(_inCounter + 1);
        }
    }
}
and this is how I converted it to work in C.
char* decrypt(char* in, int size, int server)
{
    char out[size];
    memset(out, 0, size);
    for (int i = 0; i < size; i++)
    {
        if (server == 1)
        {
            out[i] = in[i] ^ 0xAB;
            out[i] = out[i] << 4 | out[i] >> 4;
            out[i] = Key2[incounter >> 8] ^ out[i];
            out[i] = Key1[incounter & 0xFF] ^ in[i];
        }
        else if (server == 0)
        {
            out[i] = Key1[incounter & 0xFF] ^ in[i];
            out[i] = Key2[incounter >> 8] ^ out[i];
            out[i] = out[i] << 4 | out[i] >> 4;
            out[i] = out[i] ^ 0xAB;
        }
        incounter++;
    }
    return out;
}
However for some reason the C one does not work.

There was a translation error.
The C# line:
@out[i] = (byte)(ConquerKeys.Key1[_inCounter & 0xFF] ^ @out[i]);
Became:
out[i] = Key1[incounter & 0xFF] ^ in[i];
The value on the right of the xor (^) is from the wrong array.
Additionally, you are returning a stack-allocated variable, which will cause all sorts of problems.
Change:
char out[size];
memset(out, 0, size);
to:
char *out = (char*)calloc(size, sizeof(char));

The most glaring error I see is that you are returning a pointer to a stack-allocated array, which is going to get stomped by the next function call after decrypt() returns. You need to malloc() that buffer or pass in a pointer to a writable buffer.

You are returning a reference to a local variable, which is undefined behavior. Either let the caller pass in an array or use malloc() to create an array inside the function.

I also suggest changing char to unsigned char, since it is more portable. If your platform treats plain char as signed char, the arithmetic (bit shifts, etc.) will not work right.
So specify unsigned char explicitly (use a typedef, or include <stdint.h> and use uint8_t if unsigned char seems too long-winded for you).
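Putting the answers together, a corrected version might look like the sketch below. Key1, Key2, and incounter are stand-ins for the real ConquerKeys tables and per-connection counter, which are assumed here; the caller now supplies the output buffer, and the last XOR in the server branch uses out[i] as in the C# original.

```c
#include <string.h>

/* Stand-ins for the real ConquerKeys tables and connection state
 * (assumed for this sketch). */
static unsigned char Key1[256];
static unsigned char Key2[256];
static unsigned short incounter = 0;

/* The caller supplies out, so nothing stack-allocated escapes, and the
 * final XOR in the server branch reads out[i], matching the C# code. */
void decrypt(const unsigned char *in, unsigned char *out, int size, int server)
{
    for (int i = 0; i < size; i++)
    {
        if (server)
        {
            out[i] = in[i] ^ 0xAB;
            out[i] = (unsigned char)((out[i] << 4) | (out[i] >> 4));
            out[i] = Key2[incounter >> 8] ^ out[i];
            out[i] = Key1[incounter & 0xFF] ^ out[i];
        }
        else
        {
            out[i] = Key1[incounter & 0xFF] ^ in[i];
            out[i] = Key2[incounter >> 8] ^ out[i];
            out[i] = (unsigned char)((out[i] << 4) | (out[i] >> 4));
            out[i] = out[i] ^ 0xAB;
        }
        incounter++;
    }
}
```

A useful property for checking the port: with the counter reset between calls, the server branch is the exact inverse of the client branch, so encrypting then decrypting a buffer must return the original bytes.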

Related

PHP encrypt string using blowfish

I have an application running on PHP 7.2 and I need to encrypt a string using the following criteria:
Cipher: NCFB
Output encoding: Base64
Initialization Vector (IV) = 8
I already know the output I should get, but my script returns different strings every time, I think because of the IV (openssl_random_pseudo_bytes), and I can't really understand the logic of it. I am not so experienced with encryption, so I can't figure this out.
$string = 'my-string';
$cipher = 'BF-CFB';
$key = 'my-secret-key';
$ivlen = openssl_cipher_iv_length($cipher);
$iv = openssl_random_pseudo_bytes($ivlen);
$encrypted = base64_encode(openssl_encrypt($string, $cipher, $key, OPENSSL_RAW_DATA, $iv));
Example
The goal of this encryption is API access, and there is a provided example written in C# for the encryption method. The thing is that that example generates the same string every time, unlike mine. I must build my script so I get the same results as the official example provided (here is a code snippet):
public new int Encrypt(
    byte[] dataIn,
    int posIn,
    byte[] dataOut,
    int posOut,
    int count)
{
    int end = posIn + count;

    byte[] iv = this.iv;

    int ivBytesLeft = this.ivBytesLeft;
    int ivPos = iv.Length - ivBytesLeft;

    // consume what's left in the IV buffer, but make sure to keep the new
    // ciphertext in a round-robin fashion (since it represents the new IV)
    if (ivBytesLeft >= count)
    {
        // what we have is enough to deal with the request
        for (; posIn < end; posIn++, posOut++, ivPos++)
        {
            iv[ivPos] = dataOut[posOut] = (byte)(dataIn[posIn] ^ iv[ivPos]);
        }
        this.ivBytesLeft = iv.Length - ivPos;
        return count;
    }
    for (; ivPos < BLOCK_SIZE; posIn++, posOut++, ivPos++)
    {
        iv[ivPos] = dataOut[posOut] = (byte)(dataIn[posIn] ^ iv[ivPos]);
    }
    count -= ivBytesLeft;

    uint[] sbox1 = this.sbox1;
    uint[] sbox2 = this.sbox2;
    uint[] sbox3 = this.sbox3;
    uint[] sbox4 = this.sbox4;

    uint[] pbox = this.pbox;

    uint pbox00 = pbox[0];
    uint pbox01 = pbox[1];
    uint pbox02 = pbox[2];
    uint pbox03 = pbox[3];
    uint pbox04 = pbox[4];
    uint pbox05 = pbox[5];
    uint pbox06 = pbox[6];
    uint pbox07 = pbox[7];
    uint pbox08 = pbox[8];
    uint pbox09 = pbox[9];
    uint pbox10 = pbox[10];
    uint pbox11 = pbox[11];
    uint pbox12 = pbox[12];
    uint pbox13 = pbox[13];
    uint pbox14 = pbox[14];
    uint pbox15 = pbox[15];
    uint pbox16 = pbox[16];
    uint pbox17 = pbox[17];

    // now load the current IV into 32bit integers for speed
    uint hi = (((uint)iv[0]) << 24) |
              (((uint)iv[1]) << 16) |
              (((uint)iv[2]) << 8) |
              iv[3];

    uint lo = (((uint)iv[4]) << 24) |
              (((uint)iv[5]) << 16) |
              (((uint)iv[6]) << 8) |
              iv[7];

    // we deal with the even part first
    int rest = count % BLOCK_SIZE;
    end -= rest;

    for (; ; )
    {
        // need to create new IV material no matter what
        hi ^= pbox00;
        lo ^= (((sbox1[(int)(hi >> 24)] + sbox2[(int)((hi >> 16) & 0x0ff)]) ^ sbox3[(int)((hi >> 8) & 0x0ff)]) + sbox4[(int)(hi & 0x0ff)]) ^ pbox01;
        hi ^= (((sbox1[(int)(lo >> 24)] + sbox2[(int)((lo >> 16) & 0x0ff)]) ^ sbox3[(int)((lo >> 8) & 0x0ff)]) + sbox4[(int)(lo & 0x0ff)]) ^ pbox02;
        lo ^= (((sbox1[(int)(hi >> 24)] + sbox2[(int)((hi >> 16) & 0x0ff)]) ^ sbox3[(int)((hi >> 8) & 0x0ff)]) + sbox4[(int)(hi & 0x0ff)]) ^ pbox03;
        hi ^= (((sbox1[(int)(lo >> 24)] + sbox2[(int)((lo >> 16) & 0x0ff)]) ^ sbox3[(int)((lo >> 8) & 0x0ff)]) + sbox4[(int)(lo & 0x0ff)]) ^ pbox04;
        lo ^= (((sbox1[(int)(hi >> 24)] + sbox2[(int)((hi >> 16) & 0x0ff)]) ^ sbox3[(int)((hi >> 8) & 0x0ff)]) + sbox4[(int)(hi & 0x0ff)]) ^ pbox05;
        hi ^= (((sbox1[(int)(lo >> 24)] + sbox2[(int)((lo >> 16) & 0x0ff)]) ^ sbox3[(int)((lo >> 8) & 0x0ff)]) + sbox4[(int)(lo & 0x0ff)]) ^ pbox06;
        lo ^= (((sbox1[(int)(hi >> 24)] + sbox2[(int)((hi >> 16) & 0x0ff)]) ^ sbox3[(int)((hi >> 8) & 0x0ff)]) + sbox4[(int)(hi & 0x0ff)]) ^ pbox07;
        hi ^= (((sbox1[(int)(lo >> 24)] + sbox2[(int)((lo >> 16) & 0x0ff)]) ^ sbox3[(int)((lo >> 8) & 0x0ff)]) + sbox4[(int)(lo & 0x0ff)]) ^ pbox08;
        lo ^= (((sbox1[(int)(hi >> 24)] + sbox2[(int)((hi >> 16) & 0x0ff)]) ^ sbox3[(int)((hi >> 8) & 0x0ff)]) + sbox4[(int)(hi & 0x0ff)]) ^ pbox09;
        hi ^= (((sbox1[(int)(lo >> 24)] + sbox2[(int)((lo >> 16) & 0x0ff)]) ^ sbox3[(int)((lo >> 8) & 0x0ff)]) + sbox4[(int)(lo & 0x0ff)]) ^ pbox10;
        lo ^= (((sbox1[(int)(hi >> 24)] + sbox2[(int)((hi >> 16) & 0x0ff)]) ^ sbox3[(int)((hi >> 8) & 0x0ff)]) + sbox4[(int)(hi & 0x0ff)]) ^ pbox11;
        hi ^= (((sbox1[(int)(lo >> 24)] + sbox2[(int)((lo >> 16) & 0x0ff)]) ^ sbox3[(int)((lo >> 8) & 0x0ff)]) + sbox4[(int)(lo & 0x0ff)]) ^ pbox12;
        lo ^= (((sbox1[(int)(hi >> 24)] + sbox2[(int)((hi >> 16) & 0x0ff)]) ^ sbox3[(int)((hi >> 8) & 0x0ff)]) + sbox4[(int)(hi & 0x0ff)]) ^ pbox13;
        hi ^= (((sbox1[(int)(lo >> 24)] + sbox2[(int)((lo >> 16) & 0x0ff)]) ^ sbox3[(int)((lo >> 8) & 0x0ff)]) + sbox4[(int)(lo & 0x0ff)]) ^ pbox14;
        lo ^= (((sbox1[(int)(hi >> 24)] + sbox2[(int)((hi >> 16) & 0x0ff)]) ^ sbox3[(int)((hi >> 8) & 0x0ff)]) + sbox4[(int)(hi & 0x0ff)]) ^ pbox15;
        hi ^= (((sbox1[(int)(lo >> 24)] + sbox2[(int)((lo >> 16) & 0x0ff)]) ^ sbox3[(int)((lo >> 8) & 0x0ff)]) + sbox4[(int)(lo & 0x0ff)]) ^ pbox16;

        uint swap = lo ^ pbox17;
        lo = hi;
        hi = swap;

        if (posIn >= end)
        {
            // exit right in the middle so we always have new IV material for the rest below
            break;
        }

        hi ^= (((uint)dataIn[posIn]) << 24) |
              (((uint)dataIn[posIn + 1]) << 16) |
              (((uint)dataIn[posIn + 2]) << 8) |
              dataIn[posIn + 3];

        lo ^= (((uint)dataIn[posIn + 4]) << 24) |
              (((uint)dataIn[posIn + 5]) << 16) |
              (((uint)dataIn[posIn + 6]) << 8) |
              dataIn[posIn + 7];

        posIn += 8;

        // now stream out the whole block
        dataOut[posOut] = (byte)(hi >> 24);
        dataOut[posOut + 1] = (byte)(hi >> 16);
        dataOut[posOut + 2] = (byte)(hi >> 8);
        dataOut[posOut + 3] = (byte)hi;

        dataOut[posOut + 4] = (byte)(lo >> 24);
        dataOut[posOut + 5] = (byte)(lo >> 16);
        dataOut[posOut + 6] = (byte)(lo >> 8);
        dataOut[posOut + 7] = (byte)lo;

        posOut += 8;
    }

    // store back the new IV
    iv[0] = (byte)(hi >> 24);
    iv[1] = (byte)(hi >> 16);
    iv[2] = (byte)(hi >> 8);
    iv[3] = (byte)hi;
    iv[4] = (byte)(lo >> 24);
    iv[5] = (byte)(lo >> 16);
    iv[6] = (byte)(lo >> 8);
    iv[7] = (byte)lo;

    // emit the rest
    for (int i = 0; i < rest; i++)
    {
        iv[i] = dataOut[posOut + i] = (byte)(dataIn[posIn + i] ^ iv[i]);
    }

    this.ivBytesLeft = iv.Length - rest;

    return count;
}
That is exactly what is expected from your PHP code. CFB mode turns a block cipher into a stream cipher. For semantic security (randomized encryption), you need a different IV for each encryption under the same key; otherwise, once an attacker notices that an IV was reused, they can mount a two-time-pad attack, just as with a reused One-Time-Pad.
You should always generate the IV freshly.
$iv = openssl_random_pseudo_bytes($ivlen);
Note: there is still a problem: you may generate the same IV twice for the same key if the key is used too often. The easiest mitigation against IV reuse is using an incremental IV or generating the IVs with an LFSR; this is common practice. If you change the key for each encryption, then IV reuse is not a problem; however, changing the IV is easier than changing the key.
Update: I've found your C# source code just by searching for the comment
// consume what's left in the IV buffer, but make sure to keep the new
The author of this code says that
/// Useful if you don't want to deal with padding of blocks (in comparsion to CBC), however
/// a safe initialization vector (IV) is still needed.
This code is currently insecure to use.
You can use the
SetIV(value, 0);
function to initialize the IV with the value coming from the PHP encryption.

Converting CRC16 calculation from C to C#

I have to translate a CRC16 calculation from C to C#, but I get the message "cannot implicitly convert type 'int' to 'bool'" on (crc & 0x8000) and on the return (crc & 0xFFFF).
The code so far:
public unsafe short Crc16(string str)
{
    short crc = 0;
    for (int i = 0; i < str.Length; i++)
    {
        crc = (crc << 1) ^ str[i] ^ ((crc & 0x8000) ? 0x1021 : 0);
    }
    return (crc & 0xFFFF);
}
EDIT: Changed char parameter to string
Original C code
short Crc16(char *str)
{
    short crc = 0;
    unsigned int i;
    for (i = 0; i < strlen(str); i++)
        crc = (crc << 1) ^ *(str + i) ^ ((crc & 0x8000) ? 0x1021 : 0);
    return (crc & 0xffff);
}
In C, 0 and FALSE are synonymous, and any number that is not 0 is true.
To make the conversion you would do it something like this:
public short CalcCrc16(string str)
{
    short crc = 0;
    unchecked
    {
        foreach (char c in str)
        {
            short exponent = (short)((crc & 0x8000) != 0 ? 0x1021 : 0);
            crc = (short)((crc << 1) ^ (short)c ^ exponent);
        }
        return (short)(crc & 0xFFFF);
    }
}
So now that we have the C code to work with, I changed the code sample I have here to match. Below is the explanation of the changes:
char* is the C equivalent of a string.
The for loop iterates over the characters; *(str + i) can be rewritten as str[i], which is equivalent to C#'s str[i].
The ternary condition determines whether the exponent is 0x1021 or 0.
I broke up the lines so you could see the algorithm a bit more clearly.
I changed for (int i = 0; i < str.Length; i++) to a foreach over the characters because it is easier to understand.
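For reference, the original C routine can be compiled and run on small inputs to produce known values to check a port against. A minimal, self-contained sketch (the routine is the one quoted above, reformatted; the test values below were worked through by hand):

```c
#include <string.h>

/* The original C CRC routine, so a C# port can be checked against its
 * output for short strings. */
short Crc16(char *str)
{
    short crc = 0;
    unsigned int i;
    for (i = 0; i < strlen(str); i++)
        crc = (crc << 1) ^ *(str + i) ^ ((crc & 0x8000) ? 0x1021 : 0);
    return (crc & 0xffff);
}
```

For example, "A" gives 0x41 (the character itself, since crc starts at 0), and "AB" gives (0x41 << 1) ^ 0x42 = 0xC0.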

Porting CRC16 Code in C to C# .NET

So I have this C code that I need to port to C#:
C Code:
uint16 crc16_calc(volatile uint8* bytes, uint32 length)
{
    uint32 i;
    uint32 j;
    uint16 crc = 0xFFFF;
    uint16 word;
    for (i = 0; i < length / 2; i++)
    {
        word = ((uint16*)bytes)[i];
        // upper byte
        j = (uint8)((word ^ crc) >> 8);
        crc = (crc << 8) ^ crc16_table[j];
        // lower byte
        j = (uint8)((word ^ (crc >> 8)) & 0x00FF);
        crc = (crc << 8) ^ crc16_table[j];
    }
    return crc;
}
Ported C# Code:
public ushort CalculateChecksum(byte[] bytes)
{
    uint j = 0;
    ushort crc = 0xFFFF;
    ushort word;
    for (uint i = 0; i < bytes.Length / 2; i++)
    {
        word = bytes[i];
        // Upper byte
        j = (byte)((word ^ crc) >> 8);
        crc = (ushort)((crc << 8) ^ crc16_table[j]);
        // Lower byte
        j = (byte)((word ^ (crc >> 8)) & 0x00FF);
        crc = (ushort)((crc << 8) ^ crc16_table[j]);
    }
    return crc;
}
This C algorithm calculates the CRC16 of the supplied bytes using a lookup table crc16_table[j]
However the Ported C# code does not produce the same results as the C code, am I doing something wrong?
word = ((uint16*)bytes)[i];
reads two bytes from bytes into a uint16, whereas
word = bytes[i];
just reads a single byte.
Assuming you're running on a little-endian machine, your C# code could change to
word = bytes[i++];
word += (ushort)(bytes[i] << 8);
Or, probably better, as suggested by MerickOWA:
word = BitConverter.ToUInt16(bytes, (int)i++);
Note that you could avoid the odd-looking extra increment of i by changing your loop:
for (int i = 0; i < bytes.Length; i += 2)
{
    word = BitConverter.ToUInt16(bytes, i);
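The two-byte read the answer describes can be demonstrated in C as well. This sketch assembles the pair of bytes explicitly in little-endian order, which is what ((uint16*)bytes)[i] yields on a little-endian machine (the function name is mine):

```c
#include <stdint.h>

/* Assemble two consecutive bytes low-byte-first, matching what the
 * original uint16* cast reads on a little-endian machine; building the
 * value byte-by-byte also sidesteps alignment issues. */
uint16_t read_word_le(const uint8_t *bytes, uint32_t i)
{
    return (uint16_t)(bytes[2 * i] | ((uint16_t)bytes[2 * i + 1] << 8));
}
```

The single-byte read `word = bytes[i];` in the port drops the high byte entirely, which is why the checksums diverge.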

How to get amount of 1s from 64 bit number [duplicate]

This question already has answers here:
Count number of bits in a 64-bit (long, big) integer?
(3 answers)
Closed 9 years ago.
For an image comparison algorithm I get a 64-bit number as a result. The amount of 1s in the number (ulong) (101011011100...) tells me how similar two images are, so I need to count them. How would I best do this in C#?
I'd like to use this in a WinRT & Windows Phone App, so I'm also looking for a low-cost method.
EDIT: As I have to count the bits for a large number of images, I'm wondering if the lookup-table approach might be best. But I'm not really sure how that works...
Sean Eron Anderson's Bit Twiddling Hacks has this trick, among others:
Counting bits set, in parallel
unsigned int v; // count bits set in this (32-bit value)
unsigned int c; // store the total here
static const int S[] = {1, 2, 4, 8, 16}; // Magic Binary Numbers
static const int B[] = {0x55555555, 0x33333333, 0x0F0F0F0F, 0x00FF00FF, 0x0000FFFF};
c = v - ((v >> 1) & B[0]);
c = ((c >> S[1]) & B[1]) + (c & B[1]);
c = ((c >> S[2]) + c) & B[2];
c = ((c >> S[3]) + c) & B[3];
c = ((c >> S[4]) + c) & B[4];
The B array, expressed as binary, is:
B[0] = 0x55555555 = 01010101 01010101 01010101 01010101
B[1] = 0x33333333 = 00110011 00110011 00110011 00110011
B[2] = 0x0F0F0F0F = 00001111 00001111 00001111 00001111
B[3] = 0x00FF00FF = 00000000 11111111 00000000 11111111
B[4] = 0x0000FFFF = 00000000 00000000 11111111 11111111
We can adjust the method for larger integer sizes by continuing with the patterns for the Binary Magic Numbers, B and S. If there are k bits, then we need the arrays S and B to be ceil(lg(k)) elements long, and we must compute the same number of expressions for c as S or B are long. For a 32-bit v, 16 operations are used.
The best method for counting bits in a 32-bit integer v is the following:
v = v - ((v >> 1) & 0x55555555); // reuse input as temporary
v = (v & 0x33333333) + ((v >> 2) & 0x33333333); // temp
c = ((v + (v >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; // count
The best bit counting method takes only 12 operations, which is the same as the lookup-table method, but avoids the memory and potential cache misses of a table. It is a hybrid between the purely parallel method above and the earlier methods using multiplies (in the section on counting bits with 64-bit instructions), though it doesn't use 64-bit instructions. The counts of bits set in the bytes is done in parallel, and the sum total of the bits set in the bytes is computed by multiplying by 0x1010101 and shifting right 24 bits.
A generalization of the best bit counting method to integers of bit-widths up to 128 (parameterized by type T) is this:
v = v - ((v >> 1) & (T)~(T)0/3); // temp
v = (v & (T)~(T)0/15*3) + ((v >> 2) & (T)~(T)0/15*3); // temp
v = (v + (v >> 4)) & (T)~(T)0/255*15; // temp
c = (T)(v * ((T)~(T)0/255)) >> (sizeof(T) - 1) * CHAR_BIT; // count
Something along these lines would do (note that this isn't tested code; I just wrote it here, so it may and probably will require tweaking).
int numberOfOnes = 0;
for (int i = 63; i >= 0; i--)
{
    if (((yourUInt64 >> i) & 1) == 1) numberOfOnes++;
}
Option 1 - fewer iterations if the 64-bit result < 2^63:
byte numOfOnes = 0;
while (result != 0)
{
    numOfOnes += (byte)(result & 0x1);
    result >>= 1;
}
return numOfOnes;
Option 2 - constant number of iterations - can use loop unrolling:
byte numOfOnes = 0;
for (int i = 0; i < 64; i++)
{
    numOfOnes += (byte)(result & 0x1);
    result >>= 1;
}
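Since the question asks how the lookup-table approach works: the idea is to precompute the bit count of every possible byte once, then answer each 64-bit query with eight table lookups. A sketch in C (names are mine):

```c
#include <stdint.h>

/* popcount_table[i] holds the number of set bits in byte value i. */
static uint8_t popcount_table[256];

/* Fill the table using the recurrence count(i) = (i & 1) + count(i / 2);
 * i / 2 is always smaller than i, so its entry is already computed. */
void init_popcount_table(void)
{
    for (int i = 0; i < 256; i++)
        popcount_table[i] = (uint8_t)((i & 1) + popcount_table[i / 2]);
}

/* Sum the table entries for each of the eight bytes of v. */
int bit_count_lut(uint64_t v)
{
    int count = 0;
    for (int shift = 0; shift < 64; shift += 8)
        count += popcount_table[(v >> shift) & 0xFF];
    return count;
}
```

The table costs 256 bytes once; after that each query is eight shifts, masks, and loads, with no data-dependent branches.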
this is a 32-bit version of BitCount; you could easily extend it to a 64-bit version by adding one more right shift by 32, and it would be very efficient.
int bitCount(int x) {
    /* First let res = ((x >> 1) & 0x55555555) + (x & 0x55555555);
     * after that, the (2k)th and (2k+1)th bits of res
     * hold the number of 1s contained in the (2k)th
     * and (2k+1)th bits of x.
     * We can use a similar approach to calculate the number of 1s
     * contained in the (4k)th through (4k+3)th
     * bits of x, and so on for 8, 16, 32.
     */
    int varA = (85 << 8) | 85;
    varA = (varA << 16) | varA;
    int res = ((x >> 1) & varA) + (x & varA);
    varA = (51 << 8) | 51;
    varA = (varA << 16) | varA;
    res = ((res >> 2) & varA) + (res & varA);
    varA = (15 << 8) | 15;
    varA = (varA << 16) | varA;
    res = ((res >> 4) & varA) + (res & varA);
    varA = (255 << 16) | 255;
    res = ((res >> 8) & varA) + (res & varA);
    varA = (255 << 8) | 255;
    res = ((res >> 16) & varA) + (res & varA);
    return res;
}
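Following that hint, here is one way the 64-bit extension might look, using the same divide-and-conquer masks plus the multiply trick from the Bit Twiddling Hacks excerpt quoted earlier:

```c
#include <stdint.h>

/* Parallel bit count widened to 64 bits: pair counts, then nibble
 * counts, then byte counts, then a multiply sums all eight byte counts
 * into the top byte. */
int bitCount64(uint64_t x)
{
    x = x - ((x >> 1) & 0x5555555555555555ULL);
    x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL);
    x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0FULL;
    return (int)((x * 0x0101010101010101ULL) >> 56);
}
```
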

C# to F# CRC16 ^ and ^^^ works different. How ^ works?

I found this C# code for CRC16, but I need it in F#:
using System;
public class Crc16 {
    const ushort polynomial = 0xA001;
    ushort[] table = new ushort[256];

    public ushort ComputeChecksum(byte[] bytes) {
        ushort crc = 0;
        for (int i = 0; i < bytes.Length; ++i) {
            byte index = (byte)(crc ^ bytes[i]);
            crc = (ushort)((crc >> 8) ^ table[index]);
        }
        return crc;
    }

    public byte[] ComputeChecksumBytes(byte[] bytes) {
        ushort crc = ComputeChecksum(bytes);
        return BitConverter.GetBytes(crc);
    }

    public Crc16() {
        ushort value;
        ushort temp;
        for (ushort i = 0; i < table.Length; ++i) {
            value = 0;
            temp = i;
            for (byte j = 0; j < 8; ++j) {
                if (((value ^ temp) & 0x0001) != 0) {
                    value = (ushort)((value >> 1) ^ polynomial);
                } else {
                    value >>= 1;
                }
                temp >>= 1;
            }
            table[i] = value;
        }
    }
}
Here is where I started:
let ComputeChecksum(bytes : byte array) =
    let mutable crc = 0us
    for i = 0 to bytes.Length do
        let index = (crc ^^^ bytes.[i]) // ? uint16 and byte
So I think the C# version is taking the first or second byte here. I want to know how C#'s '^' works in this case, and how I can translate this line of C# code to F#.
This computes the same result as your C# code.
type Crc16() =
    let polynomial = 0xA001us
    let table = Array.init 256 (fun i ->
        ((0us, uint16 i), [0y..7y])
        ||> Seq.fold (fun (value, temp) j ->
            let newValue =
                match (value ^^^ temp) &&& 0x0001us with
                | 0us -> value >>> 1
                | _ -> ((value >>> 1) ^^^ polynomial)
            newValue, temp >>> 1)
        |> fst)
    member __.ComputeChecksum(bytes:byte[]) =
        (0us, bytes) ||> Seq.fold (fun crc byt ->
            let index = byte (crc ^^^ (uint16 byt))
            (crc >>> 8) ^^^ table.[int index])
C# ^ and F# ^^^ are both the XOR operator. They should work the same. Is that what you're asking?
