Entering the following code into the C# Immediate Window yields some unusual results, which I can only assume are because System.Guid flips certain bytes internally:
When using an ordinal byte array from 0 to 15
new Guid(new byte[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15})
[03020100-0504-0706-0809-0a0b0c0d0e0f]
When using a non-ordinal byte array with values 0 to 15
new Guid(new byte[] {3, 2, 1, 0, 5, 4, 7, 6, 8, 9, 10, 11, 12, 13, 14, 15})
[00010203-0405-0607-0809-0a0b0c0d0e0f]
Why are the first 3 groups flipped?
Found on Wikipedia regarding UUID.
Other systems, notably Microsoft's marshalling of UUIDs in their COM/OLE libraries, use a mixed-endian format, whereby the first three components of the UUID are little-endian, and the last two are big-endian.
For example, 00112233-4455-6677-8899-aabbccddeeff is encoded as the bytes 33 22 11 00 55 44 77 66 88 99 aa bb cc dd ee ff
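The same layout can be observed directly in C#; a quick sketch (the output shown is what the little-endian handling of the first three groups implies):
var g = new Guid("00112233-4455-6677-8899-aabbccddeeff");
Console.WriteLine(BitConverter.ToString(g.ToByteArray()));
// 33-22-11-00-55-44-77-66-88-99-AA-BB-CC-DD-EE-FF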
The first 4-byte block belongs to an Int32 value, and the next two 2-byte blocks belong to Int16 values; they appear reversed in the Guid's string form because of byte order. Perhaps you should try the other constructor, which takes matching integer data types as parameters and gives a more intuitive ordering:
Guid g = new Guid(0xA, 0xB, 0xC,
new Byte[] { 0, 1, 2, 3, 4, 5, 6, 7 } );
Console.WriteLine("{0:B}", g);
// The example displays the following output:
// {0000000a-000b-000c-0001-020304050607}
Look at the source code of Guid.cs to see the structure behind it:
// Represents a Globally Unique Identifier.
public struct Guid : IFormattable, IComparable,
IComparable<Guid>, IEquatable<Guid> {
// Member variables
private int _a; // <<== First group, 4 bytes
private short _b; // <<== Second group, 2 bytes
private short _c; // <<== Third group, 2 bytes
private byte _d; // <<== Fourth group, 2 bytes (_d and _e)
private byte _e;
private byte _f; // <<== Fifth group, 6 bytes (_f through _k)
private byte _g;
private byte _h;
private byte _i;
private byte _j;
private byte _k;
...
}
As you can see, internally Guid consists of a 32-bit integer, two 16-bit integers, and 8 individual bytes. On little-endian architectures the bytes of the first int and two shorts that follow it are stored in reverse order. The order of the remaining eight bytes remains unchanged.
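A short sketch tying the two views together: feeding the ordinal bytes to the byte[] constructor is equivalent to passing the little-endian reading of those bytes to the component constructor.
var fromBytes = new Guid(new byte[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 });
var fromParts = new Guid(0x03020100, 0x0504, 0x0706,
                         new byte[] { 8, 9, 10, 11, 12, 13, 14, 15 });
Console.WriteLine(fromBytes == fromParts); // True
Console.WriteLine(fromBytes);              // 03020100-0504-0706-0809-0a0b0c0d0e0f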
I was wondering whether there's an easy way to implement the method I've described in the comments below
// Yields all the longs in the range [first, last] that
// are composed of the prime digits.
// For example, if the range is [4, 50] then the numbers
// yielded are 5, 7, 22, 23, 25, 27, 32, 33, 35, 37,
// although they don't necessarily need to be yielded in order.
static IEnumerable<long> PossiblesInRange(long first, long last)
{
throw new NotImplementedException();
}
The prime digits, in order, are
static long[] PrimeDigits = { 2, 3, 5, 7 };
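One possible sketch of the idea (illustrative only; it builds the candidates digit by digit from the PrimeDigits array above and assumes first and last are non-negative and well below long.MaxValue):
// Builds prime-digit numbers breadth-first: 2, 3, 5, 7, then 22, 23, ..., 77, and so on.
// Numbers below `first` are kept as prefixes because a longer number built
// from them may still fall inside the range.
static IEnumerable<long> PossiblesInRange(long first, long last)
{
    var prefixes = new List<long> { 0 };
    while (prefixes.Count > 0)
    {
        var next = new List<long>();
        foreach (long prefix in prefixes)
        {
            foreach (long digit in PrimeDigits)
            {
                long candidate = prefix * 10 + digit;
                if (candidate > last)
                    continue;               // extending it only makes it larger
                if (candidate >= first)
                    yield return candidate; // in range
                next.Add(candidate);        // may still grow into range
            }
        }
        prefixes = next;
    }
}
For the range [4, 50] this yields 5, 7, 22, 23, 25, 27, 32, 33, 35, 37.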
I've been at this for a few days now. My original (and eventual) goal was to use CommonCrypto on iOS to encrypt a password with a given IV and key, then successfully decrypt it using .NET. After tons of research and failures I've narrowed down my goal to simply producing the same encrypted bytes on iOS and .NET, then going from there.
I've created simple test projects in .NET (C#, framework 4.5) and iOS (8.1). Please note the following code is not intended to be secure, but rather to winnow down the variables in the larger process. Also, iOS is the variable here. The final .NET encryption code will be deployed by a client, so it's up to me to bring the iOS encryption in line. Unless this is confirmed impossible, the .NET code will not be changed.
The relevant .NET encryption code:
static byte[] EncryptStringToBytes_Aes(string plainText, byte[] Key, byte[] IV)
{
    byte[] encrypted;
    // Create an Aes object
    // with the specified key and IV.
    using (Aes aesAlg = Aes.Create())
    {
        aesAlg.Padding = PaddingMode.PKCS7;
        aesAlg.KeySize = 256;
        aesAlg.BlockSize = 128;
        // Create an encryptor to perform the stream transform.
        ICryptoTransform encryptor = aesAlg.CreateEncryptor(Key, IV);
        // Create the streams used for encryption.
        using (MemoryStream msEncrypt = new MemoryStream())
        {
            using (CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write))
            {
                using (StreamWriter swEncrypt = new StreamWriter(csEncrypt))
                {
                    // Write all data to the stream.
                    swEncrypt.Write(plainText);
                }
                encrypted = msEncrypt.ToArray();
            }
        }
    }
    return encrypted;
}
The relevant iOS encryption code:
+(NSData*)AES256EncryptData:(NSData *)data withKey:(NSData*)key iv:(NSData*)ivector
{
Byte keyPtr[kCCKeySizeAES256+1]; // Pointer with room for terminator (unused)
// Pad to the required size
bzero(keyPtr, sizeof(keyPtr));
// fetch key data
[key getBytes:keyPtr length:sizeof(keyPtr)];
// -- IV LOGIC
Byte ivPtr[16];
bzero(ivPtr, sizeof(ivPtr));
[ivector getBytes:ivPtr length:sizeof(ivPtr)];
// Data length
NSUInteger dataLength = data.length;
// See the doc: For block ciphers, the output size will always be less than or equal to the input size plus the size of one block.
// That's why we need to add the size of one block here
size_t bufferSize = dataLength + kCCBlockSizeAES128;
void *buffer = malloc(bufferSize);
size_t numBytesEncrypted = 0;
CCCryptorStatus cryptStatus = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
keyPtr, kCCKeySizeAES256,
ivPtr,
data.bytes, dataLength,
buffer, bufferSize,
&numBytesEncrypted);
if (cryptStatus == kCCSuccess) {
return [NSData dataWithBytesNoCopy:buffer length:numBytesEncrypted];
}
free(buffer);
return nil;
}
The relevant code for passing the passphrase, key, and IV in .NET and printing the result:
byte[] c_IV = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 };
byte[] c_Key = { 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 };
String passPhrase = "X";
// Encrypt
byte[] encrypted = EncryptStringToBytes_Aes(passPhrase, c_Key, c_IV);
// Print result
for (int i = 0; i < encrypted.Count(); i++)
{
Console.WriteLine("[{0}] {1}", i, encrypted[i]);
}
The relevant code for passing the parameters and printing the result in iOS:
Byte c_iv[16] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 };
Byte c_key[16] = { 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 };
NSString* passPhrase = @"X";
// Convert to data
NSData* ivData = [NSData dataWithBytes:c_iv length:sizeof(c_iv)];
NSData* keyData = [NSData dataWithBytes:c_key length:sizeof(c_key)];
// Convert string to encrypt to data
NSData* passData = [passPhrase dataUsingEncoding:NSUTF8StringEncoding];
NSData* encryptedData = [CryptoHelper AES256EncryptData:passData withKey:keyData iv:ivData];
long size = sizeof(Byte);
for (int i = 0; i < encryptedData.length / size; i++) {
Byte val;
NSRange range = NSMakeRange(i * size, size);
[encryptedData getBytes:&val range:range];
NSLog(@"[%i] %hhu", i, val);
}
Upon running the .NET code it prints out the following bytes after encryption:
[0] 194
[1] 154
[2] 141
[3] 238
[4] 77
[5] 109
[6] 33
[7] 94
[8] 158
[9] 5
[10] 7
[11] 187
[12] 193
[13] 165
[14] 70
[15] 5
Conversely, iOS prints the following after encryption:
[0] 77
[1] 213
[2] 61
[3] 190
[4] 197
[5] 191
[6] 55
[7] 230
[8] 150
[9] 144
[10] 5
[11] 253
[12] 253
[13] 158
[14] 34
[15] 138
I cannot for the life of me determine what is causing this difference. Some things I've already confirmed:
Both iOS and .NET can successfully decrypt their encrypted data.
The lines of code in the .NET project:
aesAlg.Padding = PaddingMode.PKCS7;
aesAlg.KeySize = 256;
aesAlg.BlockSize = 128;
do not affect the result. They can be commented out and the output is the same. I assume this means they are the default values. I've only left them in to make it obvious that I'm matching iOS's encryption properties as closely as possible for this example.
If I print out the bytes in the iOS NSData objects "ivData" and "keyData", they produce the same list of bytes that I created them with, so I don't think this is a C <-> ObjC bridging problem for the initial parameters.
If I print out the bytes in the iOS variable "passData" it prints the same single byte as .NET (88). So I'm fairly certain they are starting the encryption with the exact same data.
Due to how concise the .NET code is, I've run out of obvious avenues of experimentation. My only thought is that someone may be able to point out a problem in my "AES256EncryptData:withKey:iv:" method. That code has been modified from the ubiquitous iOS AES256 code floating around, because the key we are provided is a byte array, not a string. I'm pretty well versed in ObjC but not nearly as comfortable with the C nonsense, so it's certainly possible I've fumbled the required modifications.
All help or suggestions would be greatly appreciated.
I notice you are using AES-256 but have a 128-bit key! 16 bytes x 8 bits = 128 bits. You cannot count on different functions padding a key the same way; that behavior is undefined.
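If the goal is simply to see both sides produce the same bytes, one option (purely illustrative, not a recommendation) is to zero-pad the 16-byte key to 32 bytes on the .NET side, mirroring what the bzero + getBytes combination in the Objective-C code effectively does. Using the c_Key, c_IV, and passPhrase values from the question:
// Zero-pad the 16-byte key to the 32 bytes AES-256 expects (new byte[32] is
// already all zeros), mirroring the bzero + getBytes steps in the iOS code.
byte[] key256 = new byte[32];
Array.Copy(c_Key, key256, c_Key.Length);
byte[] cipher = EncryptStringToBytes_Aes(passPhrase, key256, c_IV);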
You're likely dealing with an issue of string encoding. In your iOS code I see that you are passing the string as UTF-8, which would result in a one-byte string of "X". .NET by default uses UTF-16, which means you have a two-byte string of "X".
You can use How to convert a string to UTF8? to convert your string to a UTF-8 byte array in .NET. You can try writing out the byte array of the plain-text string in both cases to determine that you are in fact passing the same bytes.
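For example, a quick way to dump the plaintext bytes on the .NET side (using System.Text.Encoding) and compare them with what passData contains on iOS:
byte[] plainBytes = Encoding.UTF8.GetBytes(passPhrase);   // "X" -> a single byte, 88
for (int i = 0; i < plainBytes.Length; i++)
    Console.WriteLine("[{0}] {1}", i, plainBytes[i]);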
I have a sequence of objects, each of which has a sequence number that goes from 0 to ushort.MaxValue (0-65535). I have at most about 10,000 items in my sequence, so there should not be any duplicates, and the items are mostly sorted due to the way they are loaded. I only need to access the data sequentially; I don't need them in a list, if that helps. It is also something that is done quite frequently, so it cannot have too high a time complexity.
What is the best way to sort this list?
An example sequence could be (in this example, assume the sequence number is a single byte and wraps at 255):
240 241 242 243 244 250 251 245 246 248 247 249 252 253 0 1 2 254 255 3 4 5 6
The correct order would then be
240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 0 1 2 3 4 5 6
I have a few different approaches, including making an array of ushort.MaxValue size and just incrementing the position, but that seems like a very inefficient approach, and I have some problems when the data I receive has a jump in sequence. However, it's O(1) in performance.
Another approach is to order the items normally, then find the split (6-240), and move the first items to the end. But I'm not sure if that is a good idea.
My third idea is to loop over the sequence until I find a wrong sequence number, look ahead until I find the correct one, and move it to its correct position. However, this can potentially be quite slow if there is a wrong sequence number early on.
Is this what you are looking for?
var groups = ints.GroupBy(x => x < 255 / 2)
.OrderByDescending(list => list.ElementAt(0))
.Select(x => x.OrderBy(u => u))
.SelectMany(i => i).ToList();
Example
In:
int[] ints = new int[] { 88, 89, 90, 91, 92, 0, 1, 2, 3, 92, 93, 94, 95, 96, 97, 4, 5, 6, 7, 8, 99, 100, 9, 10, 11, 12, 13 };
Out:
88 89 90 91 92 92 93 94 95 96 97 99 100 0 1 2 3 4 5 6 7 8 9 10 11 12 13
I realise this is an old question, but I also needed to do this and would have liked an answer, so...
Use a SortedSet<FileData> with a custom comparer, where FileData contains information about the files you are working with, e.g.
struct FileData
{
public ushort SequenceNumber;
...
}
internal class Sequencer : IComparer<FileData>
{
public int Compare(FileData x, FileData y)
{
ushort comparer = (ushort)(x.SequenceNumber - y.SequenceNumber);
if (comparer == 0) return 0;
if (comparer < ushort.MaxValue / 2) return 1;
return -1;
}
}
As you read file information from disk, add it to your SortedSet.
When you read the items back out of the SortedSet, they are now in the correct order.
Note that SortedSet uses a red-black tree internally, which should give you a nice balance between performance and memory:
Insertion is O(log n)
Traversal is O(n)
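A brief usage sketch (FileData and Sequencer as defined above; the sample numbers wrap past ushort.MaxValue):
var files = new SortedSet<FileData>(new Sequencer())
{
    new FileData { SequenceNumber = 3 },      // logically last, after the wrap
    new FileData { SequenceNumber = 65530 },
    new FileData { SequenceNumber = 65534 },
};

foreach (FileData f in files)
    Console.Write("{0} ", f.SequenceNumber);  // prints: 65530 65534 3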
I've tried copying arrays in such a way that I can crunch the data in an array with multiple threads, but so far without being able to split the array into smaller chunks (let's say 1 array -> 4 quarters, i.e. 4 arrays).
The only method I can find copies from a specified (int) start point and copies all the data from that point to the end, which, if I am using multiple threads to crunch the data, nullifies the point of threading.
Here is pseudo code to show what I wish to do.
int array { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 }
int split1 { 0, 1, 2, 3 }
int split2 { 4, 5, 6, 7 }
int split3 { 8, 9, 10, 11 }
int split4 { 12, 13, 14, 15 }
or, let's say, the length of the array can't be split up evenly
int array { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 }
int split1 { 0, 1, 2, 3 }
int split2 { 4, 5, 6, 7 }
int split3 { 8, 9, 10, 11 }
int split4 { 12, 13, 14, 15, 16}
The only method I can find copies from a specified (int) start point and copies all the data from that point to the end, which, if I am using multiple threads to crunch the data, nullifies the point of threading.
It's a shame you didn't show which method that was. Array.Copy has various overloads for copying part of an array to another array. This one is probably the most helpful:
public static void Copy(
Array sourceArray,
int sourceIndex,
Array destinationArray,
int destinationIndex,
int length
)
Alternatively, look at Buffer.BlockCopy, which has basically the same signature, but the values are all in terms of bytes rather than array indexes. It also only works with arrays of primitives.
Another alternative would be not to create copies of the array at all - if each thread knows which segment of the array it should work with, it can access that directly. You should also look into Parallel.ForEach (and similar methods) as a way of parallelizing operations easily at a higher level.
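To make the Array.Copy route concrete, here is a rough sketch (the Split helper name is my own, not a framework method) that divides an int[] into a given number of chunks, letting the last chunk absorb the remainder when the length doesn't divide evenly:
static int[][] Split(int[] source, int parts)
{
    var result = new int[parts][];
    int chunk = source.Length / parts;            // base chunk size
    for (int i = 0; i < parts; i++)
    {
        int start = i * chunk;
        // the last chunk takes whatever is left over
        int length = (i == parts - 1) ? source.Length - start : chunk;
        result[i] = new int[length];
        Array.Copy(source, start, result[i], 0, length);
    }
    return result;
}
Called with the 17-element example above and parts = 4, this produces the 4/4/4/5 split shown in the question.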
I have a function that receives a power of two value.
I need to convert it to an enum range (0, 1, 2, 3, and so on), and then shift it back to the power of two range.
 0 -> 1
 1 -> 2
 2 -> 4
 3 -> 8
 4 -> 16
 5 -> 32
 6 -> 64
 7 -> 128
 8 -> 256
 9 -> 512
10 -> 1024
... and so on.
If my function receives a value of 1024, I need to convert it to 10. What is the best way to do this in C#? Should I just keep dividing by 2 in a loop and count the iterations?
I know I can put it back with (1 << 10).
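For reference, a minimal sketch of the divide-by-two loop mentioned above (it assumes the input really is a power of two and does no validation):
static int ExponentOf(uint powerOfTwo)
{
    int count = 0;
    while (powerOfTwo > 1)
    {
        powerOfTwo >>= 1;   // divide by 2
        count++;
    }
    return count;           // ExponentOf(1024) == 10
}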
Just use the logarithm of base 2:
Math.Log(/* your number */, 2)
For example, Math.Log(1024, 2) returns 10.
Update:
Here's a rather robust version that checks if the number passed in is a power of two:
public static int Log2(uint number)
{
var isPowerOfTwo = number > 0 && (number & (number - 1)) == 0;
if (!isPowerOfTwo)
{
throw new ArgumentException("Not a power of two", "number");
}
return (int)Math.Log(number, 2);
}
The check for number being a power of two is taken from http://graphics.stanford.edu/~seander/bithacks.html#DetermineIfPowerOf2
There are more tricks to find log2 of an integer on that page, starting here:
http://graphics.stanford.edu/~seander/bithacks.html#IntegerLogObvious
This is probably the fastest algorithm when your CPU doesn't have a bit-scan instruction or you can't access that instruction:
unsigned int v; // find the number of trailing zeros in 32-bit v
int r; // result goes here
static const int MultiplyDeBruijnBitPosition[32] =
{
0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9
};
r = MultiplyDeBruijnBitPosition[((uint32_t)((v & -v) * 0x077CB531U)) >> 27];
See this paper if you want to know how it works; basically, it's just a perfect hash.
Use _BitScanForward. It does exactly this.
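On the C# side, .NET Core 3.0 and later expose this through System.Numerics.BitOperations, which uses the hardware bit-scan/count instructions where they are available; for example:
using System.Numerics;

int exponent = BitOperations.Log2(1024);               // 10
int trailing = BitOperations.TrailingZeroCount(1024);  // also 10 for an exact power of two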