Packing bytes manually to send on network - c#

I have an object that has the following variables:
bool firstBool;
float firstFloat; (0.0 to 1.0)
float secondFloat; (0.0 to 1.0)
int firstInt; (0 to 10,000)
I was using a ToString method to build a string that I can send over the network. As I scale up, I have run into issues with the amount of data this takes.
The string currently looks like this:
"false:1.0:1.0:10000" — that is 19 characters at 2 bytes each, so 38 bytes.
I know that I can save on this size by manually storing the data in 4 bytes like this:
A|B|B|B|B|B|B|B
C|C|C|C|C|C|C|D
D|D|D|D|D|D|D|D
D|D|D|D|D|X|X|X
A = bool(0 or 1), B = int(0 to 128), C = int(0 to 128), D = int(0 to 16384), X = Leftover bits
I convert the floats (0.0 to 1.0) to ints (0 to 128), since I can rebuild them on the other end and the accuracy isn't super important.
I have been experimenting with BitArray and byte[] to convert the data into and out of this binary structure.
After some experiments I ended up with this serialization process (I know it needs to be cleaned up and optimized):
public byte[] Serialize() {
    byte[] firstFloatBytes = BitConverter.GetBytes(Mathf.FloorToInt(firstFloat * 128));   // Convert the float to an int (0 to 128)
    byte[] secondFloatBytes = BitConverter.GetBytes(Mathf.FloorToInt(secondFloat * 128)); // Convert the float to an int (0 to 128)
    byte[] firstIntData = BitConverter.GetBytes(Mathf.FloorToInt(firstInt));              // Get the bytes for the int

    BitArray data = new BitArray(32); // Create the size 32 BitArray to hold all the data
    int i = 0;                        // Create the index value
    data[i] = firstBool;              // Set bit 0

    BitArray ffBits = new BitArray(firstFloatBytes);
    for (i = 1; i < 8; i++) {
        data[i] = ffBits[i - 1];      // Set bits 1 to 7
    }

    BitArray sfBits = new BitArray(secondFloatBytes);
    for (i = 8; i < 15; i++) {
        data[i] = sfBits[i - 8];      // Set bits 8 to 14
    }

    BitArray fiBits = new BitArray(firstIntData);
    for (i = 15; i < 29; i++) {
        data[i] = fiBits[i - 15];     // Set bits 15 to 28
    }

    byte[] output = new byte[4];      // Create a byte[] to hold the output
    data.CopyTo(output, 0);           // Copy the bits to the byte[]
    return output;
}
Getting the information back out of this structure is proving much more complicated than getting it in. I figure I can probably work something out using the bitwise operators and bitmasks.
This is turning out to be more complicated than I expected. I thought it would be easy to access the bits of a byte[], manipulate the data directly, extract ranges of bits, and then convert back to the values required to rebuild the object. Are there best practices for this type of data serialization? Does anyone know of a tutorial or example reference I could read?

Standard and efficient serialization methods are:
Using BinaryWriter / BinaryReader:
public byte[] Serialize()
{
    using(var s = new MemoryStream())
    using(var w = new BinaryWriter(s))
    {
        w.Write(firstBool);
        w.Write(firstFloat);
        ...
        return s.ToArray();
    }
}
public void Deserialize(byte[] bytes)
{
    using(var s = new MemoryStream(bytes))
    using(var r = new BinaryReader(s))
    {
        firstBool = r.ReadBoolean();
        firstFloat = r.ReadSingle();
        ...
    }
}
Using protobuf.net
BinaryWriter / BinaryReader is much faster (around 7 times). Protobuf is more flexible, easy to use, very popular, and serializes to around 33% fewer bytes. (Of course these numbers are rough orders of magnitude and depend on what you serialize and how.)
Basically, BinaryWriter will write 1 + 4 + 4 + 4 = 13 bytes. You can shrink that to 5 bytes by converting the values to bool, byte, byte, short first, rounding them the way you want. Finally, it is easy to merge the bool into one of your bytes to get down to 4 bytes if you really want to.
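As a sketch, the 5-byte variant could look like this (assuming the quantization described in the question, with w being the BinaryWriter from the Serialize example above; quantizing to the full 0..255 byte range would work the same way):
    w.Write(firstBool);                   // 1 byte
    w.Write((byte)(firstFloat * 127f));   // 1 byte, 0.0..1.0 quantized to 0..127
    w.Write((byte)(secondFloat * 127f));  // 1 byte
    w.Write((short)firstInt);             // 2 bytes; 0..10,000 fits easily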
I don't want to discourage manual serialization entirely, but it has to be worth the price in terms of performance, because the code becomes quite unreadable. If you do it, use bit masks and binary shifts on bytes directly and keep it as simple as possible. Don't use BitArray; it is slow and no more readable.
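If you do go all the way down to 4 bytes, a minimal mask-and-shift sketch for the layout described in the question (1 bool bit, two 7-bit values, one 14-bit value; the names and the 0..127 / 0..16383 quantization are illustrative, not the asker's exact code) might look like this:

    public static byte[] Pack(bool b, float f1, float f2, int i)
    {
        uint q1 = (uint)Math.Min(127, (int)(f1 * 127));   // float 0..1 -> 0..127
        uint q2 = (uint)Math.Min(127, (int)(f2 * 127));
        uint qi = (uint)Math.Min(16383, i);               // clamp int to 14 bits
        uint packed = (b ? 1u : 0u)   // bit 0
                    | (q1 << 1)       // bits 1..7
                    | (q2 << 8)       // bits 8..14
                    | (qi << 15);     // bits 15..28
        return BitConverter.GetBytes(packed);             // note: byte order follows the local machine
    }

    public static (bool, float, float, int) Unpack(byte[] bytes)
    {
        uint packed = BitConverter.ToUInt32(bytes, 0);
        bool  b  = (packed & 1u) != 0;
        float f1 = ((packed >> 1) & 0x7F) / 127f;
        float f2 = ((packed >> 8) & 0x7F) / 127f;
        int   i  = (int)((packed >> 15) & 0x3FFF);
        return (b, f1, f2, i);
    }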

Here is a simple pack/unpack method. But you lose accuracy converting a float down to only 7 or 8 bits.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            foreach (Data data in Data.input)
            {
                Data.Print(data);
                Data results = Data.Unpack(Data.Pack(data));
                Data.Print(results);
            }
            Console.ReadLine();
        }
    }

    public class Data
    {
        public static List<Data> input = new List<Data>() {
            new Data() { firstBool = true, firstFloat = 0.2345F, secondFloat = 0.432F, firstInt = 12},
            new Data() { firstBool = true, firstFloat = 0.3445F, secondFloat = 0.432F, firstInt = 11},
            new Data() { firstBool = false, firstFloat = 0.2365F, secondFloat = 0.432F, firstInt = 9},
            new Data() { firstBool = false, firstFloat = 0.545F, secondFloat = 0.432F, firstInt = 8},
            new Data() { firstBool = true, firstFloat = 0.2367F, secondFloat = 0.432F, firstInt = 7}
        };

        public bool firstBool { get; set; }
        public float firstFloat { get; set; }   //(0.0 to 1.0)
        public float secondFloat { get; set; }  //(0.0 to 1.0)
        public int firstInt { get; set; }       //(0 to 10,000)

        public static byte[] Pack(Data data)
        {
            byte[] results = new byte[4];
            results[0] = (byte)((data.firstBool ? 0x80 : 0x00) | (byte)(data.firstFloat * 128));
            results[1] = (byte)(data.secondFloat * 256);
            results[2] = (byte)((data.firstInt >> 8) & 0xFF);
            results[3] = (byte)(data.firstInt & 0xFF);
            return results;
        }

        public static Data Unpack(byte[] data)
        {
            Data results = new Data();
            results.firstBool = ((data[0] & 0x80) == 0) ? false : true;
            results.firstFloat = ((float)(data[0] & 0x7F)) / 128.0F;
            results.secondFloat = (float)data[1] / 256.0F;
            results.firstInt = (data[2] << 8) | data[3];
            return results;
        }

        public static void Print(Data data)
        {
            Console.WriteLine("Bool : '{0}', 1st Float : '{1}', 2nd Float : '{2}', Int : '{3}'",
                data.firstBool,
                data.firstFloat,
                data.secondFloat,
                data.firstInt
            );
        }
    }
}

Related

Decreasing volume of .wav file creates heavy distortion

I have a problem that just baffles me. I import a .wav file and read it as bytes, then turn the bytes into integers, which I divide by 2 (or some other number) to decrease the volume. Then I write the new data into a new .wav file. The result is loud, heavy distortion over the original track.
Scroll to the Main() method for the relevant (C#-)code:
using System;
using System.IO;
namespace ConsoleApp2 {
class basic {
public static byte[] bit32(int num) { //turns int into byte array of length 4
byte[] numbyt = new byte[4] { 0x00, 0x00, 0x00, 0x00 };
int pow;
for (int k = 3; k >= 0; k--) {
pow = (int)Math.Pow(16, 2*k + 1);
numbyt[k] += (byte)(16*(num/pow));
num -= numbyt[k]*(pow/16);
numbyt[k] += (byte)(num/(pow/16));
num -= (num/(pow/16))*pow/16;
}
return numbyt;
}
public static byte[] bit16(int num) { //turns int into byte array of length 2
if (num < 0) {
num += 65535;
}
byte[] numbyt = new byte[2] { 0x00, 0x00 };
int pow;
for (int k = 1; k >= 0; k--) {
pow = (int)Math.Pow(16, 2*k + 1);
numbyt[k] += (byte)(16*(num/pow));
num -= numbyt[k]*(pow/16);
numbyt[k] += (byte)(num/(pow/16));
num -= (num/(pow/16))*pow/16;
}
return numbyt;
}
public static int bitint16(byte[] numbyt) { //turns byte array of length 2 into int
int num = 0;
num += (int)Math.Pow(16, 2)*numbyt[1];
num += numbyt[0];
return num;
}
}
class wavfile: FileStream {
public wavfile(string name, int len) : base(name, FileMode.Create) {
int samplerate = 44100;
byte[] riff = new byte[] { 0x52, 0x49, 0x46, 0x46 };
this.Write(riff, 0, 4);
byte[] chunksize;
chunksize = basic.bit32(36 + len*4);
this.Write(chunksize, 0, 4);
byte[] wavebyte = new byte[4] { 0x57, 0x41, 0x56, 0x45 };
this.Write(wavebyte, 0, 4);
byte[] fmt = new byte[] { 0x66, 0x6d, 0x74, 0x20 };
this.Write(fmt, 0, 4);
byte[] subchunk1size = new byte[] { 0x10, 0x00, 0x00, 0x00 };
this.Write(subchunk1size, 0, 4);
byte[] formchann = new byte[] { 0x01, 0x00, 0x02, 0x00 };
this.Write(formchann, 0, 4);
byte[] sampleratebyte = basic.bit32(samplerate);
this.Write(sampleratebyte, 0, 4);
byte[] byterate = basic.bit32(samplerate*4);
this.Write(byterate, 0, 4);
byte[] blockalign = new byte[] { 0x04, 0x00 };
this.Write(blockalign, 0, 2);
byte[] bits = new byte[] { 0x10, 0x00 };
this.Write(bits, 0, 2);
byte[] data = new byte[] { 0x64, 0x61, 0x74, 0x61 };
this.Write(data, 0, 4);
byte[] samplesbyte = basic.bit32(len*4);
this.Write(samplesbyte, 0, 4);
}
public void sound(int[] w, int len, wavfile wavorigin = null) {
byte[] wavbyt = new byte[len*4];
for (int t = 0; t < len*2; t++) {
byte[] wavbit16 = basic.bit16(w[t]);
wavbyt[2*t] = wavbit16[0];
wavbyt[2*t + 1] = wavbit16[1];
}
this.Write(wavbyt, 0, len*4);
System.Media.SoundPlayer player = new System.Media.SoundPlayer();
player.SoundLocation = this.Name;
while (true) {
player.Play();
Console.WriteLine("repeat?");
if (Console.ReadLine() == "no") {
break;
}
}
}
}
class Program {
static void Main() {
int[] song = new int[45000*2];
byte[] songbyt = File.ReadAllBytes("name.wav"); //use your stereo, 16bits per sample wav-file
for (int t = 0; t < 45000*2; t++) {
byte[] songbytsamp = new byte[2] { songbyt[44 + 2*t], songbyt[44 + 2*t + 1] }; //I skip the header
song[t] = basic.bitint16(songbytsamp)/2; //I divide by 2 here, remove the "/2" to hear the normal sound again
//song[t] *= 2;
}
wavfile wav = new wavfile("test.wav", 45000); //constructor class that writes the header of a .wav file
wav.sound(song, 45000); //method that writes the data from "song" into the .wav file
}
}
}
The problem is not the rounding down that happens when you divide an odd number by 2; you can uncomment the line that says song[t] *= 2; and hear for yourself that all of the distortion has completely disappeared again.
I must be making a small stupid mistake somewhere, but I cannot find it. I just want to make the sound data quieter to avoid distortion when I add more sounds to it.
Well, I knew it would be something stupid, and I was right. I forgot to account for the fact that negative samples are stored, in signed 16-bit two's complement, as values above 2^15, so when they are read as unsigned and divided by 2 they end up as (very large) positive values. I altered my code to subtract 2^16 from any number above 2^15 before dividing by 2. I have to thank this person though: How to reduce volume of wav stream?
If this means that my question was a duplicate, then go ahead and delete it, but I'm letting it stay for now, because someone else might find it helpful.
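As a rough sketch of that sign fix (assuming 16-bit little-endian samples; the variable names follow the code above):
    int sample = basic.bitint16(songbytsamp);   // read as unsigned, 0..65535
    if (sample > 32767) sample -= 65536;        // values above 2^15 are really negative (two's complement)
    song[t] = sample / 2;                       // halving now preserves the sign
    // (the bit16() write-back must map negatives back to the 0..65535 range)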
Using Math.Pow to do bit and byte operations is a really bad idea. That function takes double values as inputs and returns a double. It also does exponentiation (not a trivial operation). Using traditional bit shift and mask operations is clearer, much faster and less likely to introduce noise (because of the inaccuracy of doubles).
As you noticed, you really want to work with unsigned quantities (like uint/UInt32 and ushort/UInt16). Sign extension trips up everyone when doing this sort of work.
This is not a full answer to your question, but it does present a way to do the byte operations that is arguably better.
First, create a small struct to hold a combination of a bit-mask and a shift quantity:
public struct MaskAndShift {
public uint Mask {get; set;}
public int Shift {get; set;}
}
Then I create two arrays of these structs for describing what should be done to extract individual bytes from a uint or a ushort. I put them both in a static class named Worker:
public static class Worker {
public static MaskAndShift[] Mask32 = new MaskAndShift[] {
new MaskAndShift {Mask = 0xFF000000, Shift = 24},
new MaskAndShift {Mask = 0x00FF0000, Shift = 16},
new MaskAndShift {Mask = 0x0000FF00, Shift = 8},
new MaskAndShift {Mask = 0x000000FF, Shift = 0},
};
public static MaskAndShift[] Mask16 = new MaskAndShift[] {
new MaskAndShift {Mask = 0x0000FF00, Shift = 8},
new MaskAndShift {Mask = 0x000000FF, Shift = 0},
};
}
Looking at the first entry in the first array, it says "to extract the first byte from a uint, mask that uint with 0xFF000000 and shift the result 24 bits to the right". If you have endian-ness issues, you can simply re-order the entries in the array.
Then I created this static function (in the Worker class) to convert a uint / UInt32 to an array of four bytes:
public static byte[] UintToByteArray (uint input) {
var bytes = new byte[4];
int i = 0;
foreach (var maskPair in Mask32) {
var masked = input & maskPair.Mask;
if (maskPair.Shift != 0) {
masked >>= maskPair.Shift;
}
bytes[i++] = (byte) masked;
}
return bytes;
}
The code to do the same operation for a 16 bit ushort (aka UInt16) looks nearly the same (there's probably an opportunity for some refactoring here):
public static byte[] UShortToByteArray (ushort input) {
var bytes = new byte[2];
int i = 0;
foreach (var maskPair in Mask16) {
var masked = input & maskPair.Mask;
if (maskPair.Shift != 0) {
masked >>= maskPair.Shift;
}
bytes[i++] = (byte) masked;
}
return bytes;
}
The reverse operation is much simpler (however, if you have endian-ness issues, you'll need to write the code). Here I just take the entries of the array, add them into a value and shift the result:
public static uint ByteArrayToUint (byte[] bytes) {
uint result = 0;
//note that the first time through, result is zero, so shifting is a noop
foreach (var b in bytes){
result <<= 8;
result += b;
}
return result;
}
Doing this for the 16 bit version ends up being effectively the same code, so...
public static ushort ByteArrayToUshort (byte[] bytes) {
return (ushort) ByteArrayToUint(bytes);
}
Bit-twiddling never works the first time. So I wrote some test code:
public static void Main(){
//pick a nice obvious pattern
uint bit32Test = (((0xF1u * 0x100u) + 0xE2u) * 0x100u + 0xD3u) * 0x100u + 0xC4u;
Console.WriteLine("Start");
Console.WriteLine("Input 32 Value: " + bit32Test.ToString("X"));
var bytes32 = Worker.UintToByteArray(bit32Test);
foreach (var b in bytes32){
Console.WriteLine(b.ToString("X"));
}
Console.WriteLine();
ushort bit16Test = (ushort)((0xB5u * 0x100u) + 0xA6u);
Console.WriteLine("Input 16 Value: " + bit16Test.ToString("X"));
var bytes16 = Worker.UShortToByteArray(bit16Test);
foreach (var b in bytes16){
Console.WriteLine(b.ToString("X"));
}
Console.WriteLine("\r\nNow the reverse");
uint reconstitued32 = Worker.ByteArrayToUint(bytes32);
Console.WriteLine("Reconstituted 32: " + reconstitued32.ToString("X"));
ushort reconstitued16 = Worker.ByteArrayToUshort(bytes16);
Console.WriteLine("Reconstituted 16: " + reconstitued16.ToString("X"));
}
The output from that test code looks like:
Start
Input 32 Value: F1E2D3C4
F1
E2
D3
C4
Input 16 Value: B5A6
B5
A6
Now the reverse
Reconstituted 32: F1E2D3C4
Reconstituted 16: B5A6
Also note that I do everything in hexadecimal - it makes everything so much easier to read and to understand.

Unity some noise in audio after convert Byte[] to AudioClip

I'm trying to convert a base64 string into a byte[] and then into an AudioClip, but I'm getting some noise in my sound.
I'm using this class to convert byte[] to AudioClip.
using UnityEngine;
using System.Collections;
using System.IO;
using System;
namespace WWUtils.Audio{
public class WAV {
// properties
public float[] LeftChannel { get; internal set; }
public float[] RightChannel { get; internal set; }
public int ChannelCount { get; internal set; }
public int SampleCount { get; internal set; }
public int Frequency { get; internal set; }
// convert two bytes to one float in the range -1 to 1
static float bytesToFloat(byte firstByte, byte secondByte) {
// convert two bytes to one short (little endian)
short s = (short)((secondByte << 8) | firstByte);
// convert to range from -1 to (just below) 1
return s / 32768.0F;
}
static int bytesToInt(byte[] bytes,int offset=0){
int value=0;
for(int i=0;i<4;i++){
value |= ((int)bytes[offset+i])<<(i*8);
}
return value;
}
private static byte[] GetBytes(string filename){
return File.ReadAllBytes(filename);
}
// Returns left and right double arrays. 'right' will be null if sound is mono.
public WAV(string filename):
this(GetBytes(filename)) {}
public WAV(byte[] wav){
// Determine if mono or stereo
ChannelCount = wav[22]; // Forget byte 23 as 99.999% of WAVs are 1 or 2 channels
// Get the frequency
Frequency = bytesToInt(wav,24);
// Get past all the other sub chunks to get to the data subchunk:
int pos = 12; // First Subchunk ID from 12 to 16
// Keep iterating until we find the data chunk (i.e. 64 61 74 61 ...... (i.e. 100 97 116 97 in decimal))
while(!(wav[pos]==100 && wav[pos+1]==97 && wav[pos+2]==116 && wav[pos+3]==97)) {
pos += 4;
int chunkSize = wav[pos] + wav[pos + 1] * 256 + wav[pos + 2] * 65536 + wav[pos + 3] * 16777216;
pos += 4 + chunkSize;
}
pos += 8;
// Pos is now positioned to start of actual sound data.
SampleCount = (wav.Length - pos)/2; // 2 bytes per sample (16 bit sound mono)
if (ChannelCount == 2) SampleCount /= 2; // 4 bytes per sample (16 bit stereo)
// Allocate memory (right will be null if only mono sound)
LeftChannel = new float[SampleCount];
if (ChannelCount == 2) RightChannel = new float[SampleCount];
else RightChannel = null;
// Write to double array/s:
int i=0;
while (pos < wav.Length) {
LeftChannel[i] = bytesToFloat(wav[pos], wav[pos + 1]);
pos += 2;
if (ChannelCount == 2) {
RightChannel[i] = bytesToFloat(wav[pos], wav[pos + 1]);
pos += 2;
}
i++;
}
}
public override string ToString (){
return string.Format ("[WAV: LeftChannel={0}, RightChannel={1}, ChannelCount={2}, SampleCount={3}, Frequency={4}]", LeftChannel, RightChannel, ChannelCount, SampleCount, Frequency);
}
}
}
And this to play the AudioClip:
var recByte = System.Convert.FromBase64String(base64);
WAV wav = new WAV(recByte);
AudioClip audioClip = AudioClip.Create("soundLenny-Result",
wav.SampleCount, 1, wav.Frequency, false, false);
audioClip.SetData(wav.LeftChannel, 0);
//PlayClip...
I'm trying to convert one of the following output formats: https://learn.microsoft.com/pt-br/azure/cognitive-services/speech-service/rest-text-to-speech#audio-outputs
With riff-16khz-16bit-mono-pcm I get some noise in the audio.
I'm also trying to convert the format riff-24khz-16bit-mono-pcm.
If I use riff-24khz-16bit-mono-pcm I get an IndexOutOfRangeException; if I use riff-16khz-16bit-mono-pcm the audio is noisy.
This is the error.
IndexOutOfRangeException: Index was outside the bounds of the array. WWUtils.Audio.WAV..ctor (System.Byte[] wav) (at Assets/Scripts/WAV.cs:68)
If I do this with a microphone-recorded wav file, the script works fine. How can I correct this?

int to byte[] consistency over network

I have a struct that gets used all over the place and that I store as a byte array on the HD and also send to other platforms.
I used to do this by getting a string version of the struct and using getBytes(utf-8) and getString(utf-8) during serialization. With that I guess I avoided the little and big endian problems?
However that was quite a bit of overhead and I am now using this:
public static explicit operator byte[] (Int3 self)
{
byte[] int3ByteArr = new byte[12];//4*3
int x = self.x;
int3ByteArr[0] = (byte)x;
int3ByteArr[1] = (byte)(x >> 8);
int3ByteArr[2] = (byte)(x >> 0x10);
int3ByteArr[3] = (byte)(x >> 0x18);
int y = self.y;
int3ByteArr[4] = (byte)y;
int3ByteArr[5] = (byte)(y >> 8);
int3ByteArr[6] = (byte)(y >> 0x10);
int3ByteArr[7] = (byte)(y >> 0x18);
int z = self.z;
int3ByteArr[8] = (byte)z;
int3ByteArr[9] = (byte)(z >> 8);
int3ByteArr[10] = (byte)(z >> 0x10);
int3ByteArr[11] = (byte)(z >> 0x18);
return int3ByteArr;
}
public static explicit operator Int3(byte[] self)
{
int x = self[0] + (self[1] << 8) + (self[2] << 0x10) + (self[3] << 0x18);
int y = self[4] + (self[5] << 8) + (self[6] << 0x10) + (self[7] << 0x18);
int z = self[8] + (self[9] << 8) + (self[10] << 0x10) + (self[11] << 0x18);
return new Int3(x, y, z);
}
It works quite well for me, but I am not quite sure how little/big endian works. Do I still have to take care of something here to be safe when some other machine receives an int I sent as a byte array?
Your current approach will not work when your application is running on a system that uses big-endian byte order. With the approach below you don't need any manual reordering at all:
You don't need to reverse byte arrays yourself.
And you don't need to check the endianness of the system yourself.
The static method IPAddress.HostToNetworkOrder converts an integer to an integer in big-endian (network) order.
The static method IPAddress.NetworkToHostOrder converts an integer back to the order your system uses.
Those methods check the endianness of the system and reorder the bytes only when necessary.
For getting the bytes from an integer and back, use BitConverter:
public struct ThreeIntegers
{
public int One;
public int Two;
public int Three;
}
public static byte[] ToBytes(this ThreeIntegers value )
{
byte[] bytes = new byte[12];
byte[] bytesOne = IntegerToBytes(value.One);
Buffer.BlockCopy(bytesOne, 0, bytes, 0, 4);
byte[] bytesTwo = IntegerToBytes(value.Two);
Buffer.BlockCopy(bytesTwo , 0, bytes, 4, 4);
byte[] bytesThree = IntegerToBytes(value.Three);
Buffer.BlockCopy(bytesThree , 0, bytes, 8, 4);
return bytes;
}
public static byte[] IntegerToBytes(int value)
{
int reordered = IPAddress.HostToNetworkOrder(value);
return BitConverter.GetBytes(reordered);
}
And converting from bytes to struct
public static ThreeIntegers GetThreeIntegers(byte[] bytes)
{
int rawValueOne = BitConverter.ToInt32(bytes, 0);
int valueOne = IPAddress.NetworkToHostOrder(rawValueOne);
int rawValueTwo = BitConverter.ToInt32(bytes, 4);
int valueTwo = IPAddress.NetworkToHostOrder(rawValueTwo);
int rawValueThree = BitConverter.ToInt32(bytes, 8);
int valueThree = IPAddress.NetworkToHostOrder(rawValueThree);
return new ThreeIntegers { One = valueOne, Two = valueTwo, Three = valueThree };
}
If you will use BinaryReader and BinaryWriter for saving and sending to another platforms then BitConverter and byte array manipulating can be dropped off.
// BinaryWriter.Write have overload for Int32
public static void SaveThreeIntegers(ThreeIntegers value)
{
using(var stream = CreateYourStream())
using (var writer = new BinaryWriter(stream))
{
int reorderedOne = IPAddress.HostToNetworkOrder(value.One);
writer.Write(reorderedOne);
int reorderedTwo = IPAddress.HostToNetworkOrder(value.Two);
writer.Write(reorderedTwo);
int reorderedThree = IPAddress.HostToNetworkOrder(value.Three);
writer.Write(reorderedThree);
}
}
For reading value
public static ThreeIntegers LoadThreeIntegers()
{
using(var stream = CreateYourStream())
using (var reader = new BinaryReader(stream))
{
int rawValueOne = reader.ReadInt32();
int valueOne = IPAddress.NetworkToHostOrder(rawValueOne);
int rawValueTwo = reader.ReadInt32();
int valueTwo = IPAddress.NetworkToHostOrder(rawValueTwo);
int rawValueThree = reader.ReadInt32();
int valueThree = IPAddress.NetworkToHostOrder(rawValueThree);
return new ThreeIntegers { One = valueOne, Two = valueTwo, Three = valueThree };
}
}
Of course you can refactor the methods above into a cleaner solution, or add them as extension methods for BinaryWriter and BinaryReader, as sketched below.
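For example, a minimal sketch of such extension methods (the names are illustrative; needs using System.IO and System.Net):
    public static class EndianExtensions
    {
        // Writes an Int32 in network (big-endian) byte order.
        public static void WriteNetworkOrder(this BinaryWriter writer, int value)
        {
            writer.Write(IPAddress.HostToNetworkOrder(value));
        }
        // Reads an Int32 that was written in network byte order.
        public static int ReadInt32NetworkOrder(this BinaryReader reader)
        {
            return IPAddress.NetworkToHostOrder(reader.ReadInt32());
        }
    }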
Yes, you do. With a change of endianness, a serialization that preserves byte ordering will run into trouble.
Take the int value 385. As a 32-bit integer, written most significant byte first (big-endian), its bytes are
00 00 01 81
A little-endian reader treats the first byte as the least significant, so it reads those same four bytes back as
0x81010000, which is 2164326400.
If you use the BitConverter class, there is a bool property (BitConverter.IsLittleEndian) describing the endianness of the system. BitConverter can also produce the byte arrays for you.
You will have to pick one endianness for the wire format and reverse the byte arrays according to the serializing or deserializing system's endianness.
The description on MSDN is actually quite detailed. Here they use Array.Reverse for simplicity. I am not certain that your casting to/from byte in order to do the bit manipulation is in fact the fastest way of converting, but that is easily benchmarked.
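A minimal sketch of that pattern, standardizing on little-endian for the wire format (reverse only when the local machine differs):
    public static byte[] GetLittleEndianBytes(int value)
    {
        byte[] bytes = BitConverter.GetBytes(value);   // native byte order
        if (!BitConverter.IsLittleEndian)
            Array.Reverse(bytes);                      // normalize to little-endian
        return bytes;
    }

    public static int ToInt32LittleEndian(byte[] bytes)
    {
        if (!BitConverter.IsLittleEndian)
            Array.Reverse(bytes);                      // back to native order before converting
        return BitConverter.ToInt32(bytes, 0);
    }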

C# - Cast a byte array to an array of struct and vice-versa (reverse)

I would like to save a Color[] to a file. To do so, I found that saving a byte array to a file using "System.IO.File.WriteAllBytes" should be very efficient.
I would like to cast my Color[] (array of struct) to a byte array in a safe way, considering:
Potential little endian / big endian problems (I think this cannot actually happen, but I want to be sure)
Having 2 different pointers to the same memory that point to different types. Will the garbage collector know what to do (moving objects, deleting a pointer)?
If it is possible, it would be nice to have a generic way to cast an array of bytes to an array of any struct (where T : struct) and vice versa.
If not possible, why ?
Thanks,
Eric
I think those 2 solutions make a copy that I would like to avoid, and they both use Marshal.PtrToStructure, which is specific to a single structure and not to an array of structures:
Reading a C/C++ data structure in C# from a byte array
How to convert a structure to a byte array in C#?
Since .NET Core 2.1, yes we can! Enter MemoryMarshal.
We will treat our Color[] as a ReadOnlySpan<Color>. We reinterpret that as a ReadOnlySpan<byte>. Finally, since WriteAllBytes has no span-based overload, we use a FileStream to write the span to disk.
var byteSpan = MemoryMarshal.AsBytes(colorArray.AsSpan());
fileStream.Write(byteSpan);
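Reading the file back can be sketched the same way (path here is a hypothetical file path; the file is assumed to have been written by the code above):
    byte[] raw = File.ReadAllBytes(path);
    Color[] colors = MemoryMarshal.Cast<byte, Color>(raw).ToArray();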
As an interesting side note, you can also experiment with the [StructLayout(LayoutKind.Explicit)] as an attribute on your fields. It allows you to specify overlapping fields, effectively allowing the concept of a union.
Here is a blog post on MSDN that illustrates this. It shows the following code:
[StructLayout(LayoutKind.Explicit)]
public struct MyUnion
{
[FieldOffset(0)]
public UInt16 myInt;
[FieldOffset(0)]
public Byte byte1;
[FieldOffset(1)]
public Byte byte2;
}
In this example, the UInt16 field overlaps with the two Byte fields.
This seems to be strongly related to what you are trying to do. It gets you very close, except for the part of writing all the bytes (especially of multiple Color objects) efficiently. :)
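For example (a small sketch; the concrete byte values assume a little-endian machine):
    var u = new MyUnion { myInt = 0x1234 };
    Console.WriteLine("{0:X2} {1:X2}", u.byte1, u.byte2);   // prints "34 12" on little-endian hardware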
Regarding Array Type Conversion
C# as a language intentionally makes the process of flattening objects or arrays into byte arrays difficult, because this approach goes against the principles of .NET strong typing. The conventional alternatives include several serialization tools, which are generally seen as safer and more robust, or manual serialization code such as BinaryWriter.
Having two variables of different types point to the same object in memory can only be performed if the types of the variables can be cast, implicitly or explicitly. Casting from an array of one element type to another is no trivial task: it would have to convert the internal members that keep track of things such as array length, etc.
A simple way to write and read Color[] to file
void WriteColorsToFile(string path, Color[] colors)
{
    BinaryWriter writer = new BinaryWriter(File.OpenWrite(path));
    writer.Write(colors.Length);
    foreach(Color color in colors)
    {
        writer.Write(color.ToArgb());
    }
    writer.Close();
}

Color[] ReadColorsFromFile(string path)
{
    BinaryReader reader = new BinaryReader(File.OpenRead(path));
    int length = reader.ReadInt32();
    Color[] result = new Color[length];
    for(int n = 0; n < length; n++)
    {
        result[n] = Color.FromArgb(reader.ReadInt32());
    }
    reader.Close();
    return result;
}
You could use pointers if you really want to copy the raw bytes yourself rather than go through a serializer, similar to this:
var structPtr = (byte*)&yourStruct;
var size = sizeof(YourType);
var memory = new byte[size];
fixed(byte* memoryPtr = memory)
{
for(int i = 0; i < size; i++)
{
*(memoryPtr + i) = *structPtr++;
}
}
File.WriteAllBytes(path, memory);
I just tested this and after adding the fixed block and some minor corrections it looks like it is working correctly.
This is what I used to test it:
public static void Main(string[] args)
{
var a = new s { i = 1, j = 2 };
var sPtr = (byte*)&a;
var size = sizeof(s);
var mem = new byte[size];
fixed (byte* memPtr = mem)
{
for (int i = 0; i < size; i++)
{
*(memPtr + i) = *sPtr++;
}
}
File.WriteAllBytes("A:\\file.txt", mem);
}
struct s
{
internal int i;
internal int j;
}
The result is the following:
(I manually resolved the hex bytes in the second line, only the first line was produced by the program)
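For completeness, the reverse direction (bytes back into a struct) can be sketched the same way, reusing the s struct and the file written by the Main above:
    var memory = File.ReadAllBytes("A:\\file.txt");
    var result = default(s);
    var resultPtr = (byte*)&result;
    fixed (byte* memoryPtr = memory)
    {
        for (int i = 0; i < sizeof(s); i++)
        {
            *(resultPtr + i) = *(memoryPtr + i);
        }
    }
    // result.i == 1 and result.j == 2 again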
public struct MyX
{
public int IntValue;
[MarshalAs(UnmanagedType.ByValArray, SizeConst = 3, ArraySubType = UnmanagedType.U1)]
public byte[] Array;
MyX(int i, int b)
{
IntValue = b;
Array = new byte[3];
}
public MyX ToStruct(byte []ar)
{
byte[] data = ar;//= { 1, 0, 0, 0, 9, 8, 7 }; // IntValue = 1, Array = {9,8,7}
IntPtr ptPoit = Marshal.AllocHGlobal(data.Length);
Marshal.Copy(data, 0, ptPoit, data.Length);
MyX x = (MyX)Marshal.PtrToStructure(ptPoit, typeof(MyX));
Marshal.FreeHGlobal(ptPoit);
return x;
}
public byte[] ToBytes()
{
Byte[] bytes = new Byte[Marshal.SizeOf(typeof(MyX))];
GCHandle pinStructure = GCHandle.Alloc(this, GCHandleType.Pinned);
try
{
Marshal.Copy(pinStructure.AddrOfPinnedObject(), bytes, 0, bytes.Length);
return bytes;
}
finally
{
pinStructure.Free();
}
}
}
void function()
{
byte[] data = { 1, 0, 0, 0, 9, 8, 7 }; // IntValue = 1, Array = {9,8,7}
IntPtr ptPoit = Marshal.AllocHGlobal(data.Length);
Marshal.Copy(data, 0, ptPoit, data.Length);
var x = (MyX)Marshal.PtrToStructure(ptPoit, typeof(MyX));
Marshal.FreeHGlobal(ptPoit);
var MYstruc = x.ToStruct(data);
Console.WriteLine("x.IntValue = {0}", x.IntValue);
Console.WriteLine("x.Array = ({0}, {1}, {2})", x.Array[0], x.Array[1], x.Array[2]);
}
Working code for reference (take care, I did not need the alpha channel in my case):
// ************************************************************************
// If someday Microsoft make Color serializable ...
//public static void SaveColors(Color[] colors, string path)
//{
// BinaryFormatter bf = new BinaryFormatter();
// MemoryStream ms = new MemoryStream();
// bf.Serialize(ms, colors);
// byte[] bytes = ms.ToArray();
// File.WriteAllBytes(path, bytes);
//}
// If someday Microsoft make Color serializable ...
//public static Colors[] LoadColors(string path)
//{
// Byte[] bytes = File.ReadAllBytes(path);
// BinaryFormatter bf = new BinaryFormatter();
// MemoryStream ms2 = new MemoryStream(bytes);
// return (Colors[])bf.Deserialize(ms2);
//}
// ******************************************************************
public static void SaveColorsToFile(Color[] colors, string path)
{
var formatter = new BinaryFormatter();
int count = colors.Length;
using (var stream = File.OpenWrite(path))
{
formatter.Serialize(stream, count);
for (int index = 0; index < count; index++)
{
formatter.Serialize(stream, colors[index].R);
formatter.Serialize(stream, colors[index].G);
formatter.Serialize(stream, colors[index].B);
}
}
}
// ******************************************************************
public static Color[] LoadColorsFromFile(string path)
{
var formatter = new BinaryFormatter();
Color[] colors;
using (var stream = File.OpenRead(path))
{
int count = (int)formatter.Deserialize(stream);
colors = new Color[count];
for (int index = 0; index < count; index++)
{
byte r = (byte)formatter.Deserialize(stream);
byte g = (byte)formatter.Deserialize(stream);
byte b = (byte)formatter.Deserialize(stream);
colors[index] = Color.FromRgb(r, g, b);
}
}
return colors;
}
// ******************************************************************

Byte array cryptography in C#

I want to create a simple cipher using bitwise operators.
However, I am failing to do so.
I want it to use bitwise operators and a key byte array to encrypt and decrypt my data byte array.
public class Cryptographer
{
private byte[] Keys { get; set; }
public Cryptographer(string password)
{
Keys = Encoding.ASCII.GetBytes(password);
}
public void Encrypt(byte[] data)
{
for(int i = 0; i < data.Length; i++)
{
data[i] = (byte) (data[i] & Keys[i]);
}
}
public void Decrypt(byte[] data)
{
for (int i = 0; i < data.Length; i++)
{
data[i] = (byte)(Keys[i] & data[i]);
}
}
}
I know this is wrong, that's why I need help.
I simply want to use one string to encrypt and decrypt all data.
This is what is sometimes known as 'craptography', because it provides the illusion of security while being functionally useless in protecting anything. Use the framework classes if you want to do cryptography right, because it's extremely difficult to roll your own.
Take a look at this for advice on what you are trying to do (encrypt/decrypt) - http://msdn.microsoft.com/en-us/library/e970bs09.aspx. Really your requirements should determine what classes you decide to use. This has good background: http://msdn.microsoft.com/en-us/library/92f9ye3s.aspx
For simple encrypt/decrypt (if this is what you need) DPAPI may be the simplest way.
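For example, a minimal DPAPI sketch via ProtectedData (Windows only; needs a reference to System.Security plus using System.Security.Cryptography and System.Text; the values are illustrative):
    byte[] plain = Encoding.ASCII.GetBytes("secret");
    byte[] cipher = ProtectedData.Protect(plain, null, DataProtectionScope.CurrentUser);
    byte[] roundTrip = ProtectedData.Unprotect(cipher, null, DataProtectionScope.CurrentUser);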
You seem to be trying to implement the XOR cipher. XOR is ^ in C#:
public void Crypt(byte[] data)
{
for(int i = 0; i < data.Length; i++)
{
data[i] = (byte) (data[i] ^ Keys[i]);
}
}
Since the Encrypt and Decrypt method do exactly the same, you need only one method.
Note, however, that this is just a toy and not suitable to secure data in real-world scenarios. Have a look at the System.Security.Cryptography Namespace which provides many implementations of proven algorithms. Using these correctly is still hard to get right though.
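For instance, a rough AES sketch (key/IV management, error handling and authentication are deliberately glossed over; the key must be 16, 24 or 32 bytes and the IV 16 bytes):
    using System.IO;
    using System.Security.Cryptography;

    public static byte[] EncryptAes(byte[] data, byte[] key, byte[] iv)
    {
        using (var aes = Aes.Create())
        using (var encryptor = aes.CreateEncryptor(key, iv))
        using (var ms = new MemoryStream())
        {
            using (var cs = new CryptoStream(ms, encryptor, CryptoStreamMode.Write))
            {
                cs.Write(data, 0, data.Length);
            }
            return ms.ToArray();
        }
    }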
Use the XOR operator (^), not AND (&). Also, you should not assume that the data and the key are the same length.
public class Cryptographer
{
private byte[] Keys { get; set; }
public Cryptographer(string password)
{
Keys = Encoding.ASCII.GetBytes(password);
}
public void Encrypt(byte[] data)
{
for(int i = 0; i < data.Length; i++)
{
data[i] = (byte) (data[i] ^ Keys[i % Keys.Length]);
}
}
public void Decrypt(byte[] data)
{
for (int i = 0; i < data.Length; i++)
{
data[i] = (byte)(Keys[i % Keys.Length] ^ data[i]);
}
}
}
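Usage would then look something like this (a small sketch):
    var crypto = new Cryptographer("my secret key");
    byte[] data = Encoding.ASCII.GetBytes("hello world");
    crypto.Encrypt(data);   // data is now XOR-ed with the repeating key
    crypto.Decrypt(data);   // XOR-ing again with the same key restores the original bytes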
static void Main(string[] args)
{
Int32 a = 138;
Console.WriteLine("first int: " + a.ToString());
byte[] bytes = BitConverter.GetBytes(a);
var bits = new BitArray(bytes);
String lol = ToBitString(bits);
Console.WriteLine("bit int: " + lol);
lol = lol.Substring(1, lol.Length - 1) + lol[0];
Console.WriteLine("left : " + lol);
byte[] bytes_new = GetBytes(lol);
byte[] key = { 12, 13, 24, 85 };
var bits2 = new BitArray(key);
String lol2 = ToBitString(bits2);
Console.WriteLine("key : " + lol2);
byte[] cryptedBytes = Crypt(bytes_new, key);
var bits3 = new BitArray(cryptedBytes);
String lol3 = ToBitString(bits3);
Console.WriteLine(" XOR: " + lol3);
byte[] deCryptedBytes = Crypt(cryptedBytes, key);
var bits4 = new BitArray(cryptedBytes);
String lol4 = ToBitString(bits4);
Console.WriteLine(" DEXOR: " + lol4);
int a_new = BitConverter.ToInt32(bytes_new, 0);
Console.WriteLine("and int: " + a_new.ToString());
Console.ReadLine();
}
public static byte[] Crypt(byte[] data, byte[] key)
{
byte[] toCrypt = data;
for (int i = 0; i < toCrypt.Length; i++)
{
toCrypt[i] = (byte)(toCrypt[i] ^ key[i]);
}
return toCrypt;
}
private static String ToBitString(BitArray bits)
{
var sb = new StringBuilder();
for (int i = bits.Count - 1; i >= 0; i--)
{
char c = bits[i] ? '1' : '0';
sb.Append(c);
}
return sb.ToString();
}
private static byte[] GetBytes(string bitString)
{
byte[] result = Enumerable.Range(0, bitString.Length / 8).
Select(pos => Convert.ToByte(
bitString.Substring(pos * 8, 8),
2)
).ToArray();
List<byte> mahByteArray = new List<byte>();
for (int i = result.Length - 1; i >= 0; i--)
{
mahByteArray.Add(result[i]);
}
return mahByteArray.ToArray();
}
Remember, there is no such thing as a 'secure' cipher. Any encryption method that can be written can be broken.
With that being said, using simple bitwise techniques for encryption is inviting a not too bright hacker to break your encryption. There are guys/gals that sit around all day long with nothing better to do.
Use one of the encryption libraries that uses a large key and do something 'unusual' to that key before using it. Even so, remember, there are people employed and not employed to do nothing but break cryptographic messages all around the world; 24 by 7.
The Germans thought they had an un-breakable system in WW II. They called it Enigma. Do some reading on it and you will discover that it was broken even before the war broke out!
