Convert int to byte array and insert into another array - C#

I'm working on a C# WPF application in which I want to do several things. I'm working with byte arrays to compose MIDI Show Control messages (as specified in the MSC Specification 1.0).
In this message structure, a 0x00 byte acts like a comma between the parts of the message. I compose a message like this:
byte[] data =
{
    (byte)0xF0,    // SysEx
    (byte)0x7F,    // Realtime
    (byte)0x7F,    // Device id
    (byte)0x02,    // Constant
    (byte)0x01,    // Lighting format
    (commandbyte), // GO
    (qnumber),     // qnumber
    (byte)0x00,    // comma
    (qlist),       // qlist
    (byte)0x00,    // comma
    (byte)0xF7,    // End of SysEx
};
I want the user to fill in numbers (like 215.5), and I want to convert these numbers to bytes without any 0x00 bytes, because those would make the message be misinterpreted.
What is the best way to convert the numbers and place the resulting bytes at the positions marked above?

You might want to take a look at the BitConverter class, which is designed to convert base types into byte arrays.
http://msdn.microsoft.com/en-us/library/system.bitconverter.aspx
But I'm not sure what guidance you are seeking for placing the items into your array. Array.Copy can work to simply copy the bytes in, but maybe I am misunderstanding.
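For illustration, a minimal sketch of that suggestion (the buffer size and offset are made up for the example):
// BitConverter turns the number into bytes; Array.Copy splices them into a larger buffer.
// Note: BitConverter.GetBytes(int) always yields 4 bytes, 0x00 bytes included, so the
// "no 0x00 bytes" requirement of the MSC message still has to be handled separately.
int qnumber = 215;                                    // user input
byte[] qnumberBytes = BitConverter.GetBytes(qnumber); // 4 bytes, little-endian on most systems

byte[] message = new byte[16];                        // hypothetical target buffer
int insertAt = 6;                                     // hypothetical offset of the qnumber field
Array.Copy(qnumberBytes, 0, message, insertAt, qnumberBytes.Length);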

Found it out like this:
Used someone else's converter code like this:
static byte[] VlqEncode(int value)
{
    uint uvalue = (uint)value;
    if (uvalue < 128)
        return new byte[] { (byte)uvalue }; // simplest case
    // calculate length of buffer required
    int len = 0;
    do
    {
        len++;
        uvalue >>= 7;
    } while (uvalue != 0);
    // encode
    uvalue = (uint)value;
    byte[] buffer = new byte[len];
    int offset = 0;
    do
    {
        buffer[offset] = (byte)(uvalue & 127); // only the last 7 bits
        uvalue >>= 7;
        if (uvalue != 0)
            buffer[offset++] |= 128; // continuation bit
    } while (uvalue != 0);
    return buffer;
}
Then I use this to convert the integer:
byte[] mybytearray = VlqEncode(integer);
I then make a new ArrayList to which I add each item in sequence:
ArrayList mymessage = new ArrayList();
foreach (byte b in mybytearray)
{
    mymessage.Add(b);
}
mymessage.Add((byte)0x00);
and so on, until I have the correct message. At the end I only have to convert this to a byte array like this:
byte[] data = (byte[])mymessage.ToArray(typeof(byte));
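For what it's worth, a generic List<byte> would avoid the boxing and casting that ArrayList needs; a sketch of the same assembly (assuming commandbyte is a byte and qnumber/qlist are the ints the user entered):
var message = new List<byte> { 0xF0, 0x7F, 0x7F, 0x02, 0x01, commandbyte };
message.AddRange(VlqEncode(qnumber));   // encoded cue number
message.Add(0x00);                      // comma
message.AddRange(VlqEncode(qlist));     // encoded cue list
message.Add(0x00);                      // comma
message.Add(0xF7);                      // End of SysEx
byte[] data = message.ToArray();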

Related

Indicating the end of a raw data chunk in an RLE algorithm that can contain all byte values

I'm writing an RLE algorithm in C# that can work on any file as input. The approach to encoding I'm taking is as follows:
An RLE packet contains 1 byte for the length and 1 byte for the value. For example, if the byte 0xFF appeared 3 times in a row, 0x03 0xFF would be written to the file.
If representing the data as raw data would be more efficient, I write a 0x00 byte to mark the switch to raw data. This works because the length of a packet can never be zero. If I wanted to add the bytes 0x53 0x2C 0x01 to my compressed file, it would look like this:
0x03 0xFF 0x00 0x53 0x2C 0x01
However, a problem arises when trying to switch back to RLE packets. I can't use a byte value as a marker like I did for switching to raw data, because any byte value from 0x00 to 0xFF can appear in the input data, and the decoder would misinterpret that byte as a marker and ruin everything.
What can I do to indicate that I have to switch back to RLE packets when it can't be written as data in the file?
Here is my code if it helps:
private static void RunLengthEncode(ref byte[] bytes)
{
    // Create a list to store the output bytes
    List<byte> output = new List<byte>();
    byte runLengthByte;
    int runLengthCounter = 0;

    // Set the RLE byte to the first byte in the array
    runLengthByte = bytes[0];

    // For each byte in the input array...
    for (int i = 0; i < bytes.Length; i++)
    {
        if (runLengthByte == bytes[i] || runLengthCounter == 255)
        {
            runLengthCounter++;
        }
        else
        {
            // RLE packets under 3 should be written as raw data to avoid increasing the file size
            if (runLengthCounter < 3)
            {
                // Add a 0x00 to indicate raw data
                output.Add(0x00);
                // Add the bytes that were skipped while counting the run length
                for (int j = i - runLengthCounter; j < i; j++)
                {
                    output.Add(bytes[j]);
                }
            }
            else
            {
                // Add 2 bytes, one for the run length and one for the value
                output.Add((byte)runLengthCounter);
                output.Add(runLengthByte);
            }
            runLengthCounter = 1;
            runLengthByte = bytes[i];
        }

        // Add the last bytes to the list when finishing
        if (i == bytes.Length - 1)
        {
            // Add 2 bytes, one for the run length and one for the value
            output.Add((byte)runLengthCounter);
            output.Add(runLengthByte);
        }
    }

    // Set the bytes to the RLE encoded data
    bytes = output.ToArray();
}
Also if you want to comment and say that RLE isn't very efficient for binary data, I know it isn't. This is a project I'm doing to implement many kinds of compression to learn about them, not for an actual product.
Any help would be appreciated! Thanks!
There are many ways to unambiguously encode run lengths. One simple way, seen from the decoder's side: if you see two equal bytes in a row, then the next byte is a count of additional repeats of that byte after those first two, i.e. 0..255 additional repeats, encoding runs of 2..257. (There's no point in encoding runs of 0 or 1.)
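A rough decoder sketch of that rule, just to illustrate the idea (RleDecode is a made-up name, and the snippet assumes a well-formed input where every doubled byte is followed by a count):
// Needs: using System.Collections.Generic;
// Rule: bytes are copied as-is; when two equal bytes appear in a row, the byte
// after them is the number of ADDITIONAL repeats (0..255) of that value.
static byte[] RleDecode(byte[] input)
{
    var output = new List<byte>();
    int i = 0;
    while (i < input.Length)
    {
        byte value = input[i++];
        output.Add(value);
        if (i < input.Length && input[i] == value)
        {
            output.Add(input[i++]);        // the second byte of the pair
            int extra = input[i++];        // additional repeats after the pair
            for (int k = 0; k < extra; k++)
                output.Add(value);
        }
    }
    return output.ToArray();
}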

Read and write more than 8 bit symbols

I am trying to write an encoded file. The file has 9 to 12 bit symbols. While writing the file, I suspect that the 9 bit symbols are not being written correctly, because I am unable to decode that file. When the file contains only 8 bit symbols, everything works fine. This is the way I am writing the file:
File.AppendAllText(outputFileName, WriteBackContent, ASCIIEncoding.Default);
The same goes for reading, with a ReadAllText call.
What is the way to go here?
I am using the ZXing library to encode my file with its Reed-Solomon (RS) encoder.
ReedSolomonEncoder enc = new ReedSolomonEncoder(GenericGF.AZTEC_DATA_12); // with AZTEC_DATA_8 it works fine because the symbol size is 8 bit
int[] bytesAsInts = Array.ConvertAll(toBytes.ToArray(), c => (int)c);
enc.encode(bytesAsInts, parity);
byte[] bytes = bytesAsInts.Select(x => (byte)x).ToArray();
string contentWithParity = (ASCIIEncoding.Default.GetString(bytes.ToArray()));
WriteBackContent += contentWithParity;
File.AppendAllText(outputFileName, WriteBackContent, ASCIIEncoding.Default);
As shown in the code, I am initializing my encoder with AZTEC_DATA_12, which means 12 bit symbols. Because the RS encoder requires an int array, I convert the data to an int array and write it to the file as shown above. It works well with AZTEC_DATA_8 because of the 8 bit symbol size, but not with AZTEC_DATA_12.
Main problem is here:
byte[] bytes = bytesAsInts.Select(x => (byte)x).ToArray();
You are basically throwing away part of the result when converting the single integers to single bytes.
If you look at the array after the call to encode(), you can see that some of the array elements have a value higher than 255, so they cannot be represented as bytes. However, in your code quoted above, you cast every single element in the integer array to byte, changing the element when it has a value greater than 255.
So to store the result of encode(), you have to convert the integer array to a byte array in a way that the values are not lost or modified.
In order to make this kind of conversion between byte arrays and integer arrays, you can use the function Buffer.BlockCopy(). An example on how to use this function is in this answer.
Use the samples from the answer and the one from the comment to the answer for both conversions: Turning a byte array to an integer array to pass to the encode() function and to turn the integer array returned from the encode() function back into a byte array.
Here are the sample codes from the linked answer:
// Convert integer array to byte array
byte[] result = new byte[intArray.Length * sizeof(int)];
Buffer.BlockCopy(intArray, 0, result, 0, result.Length);
// Convert byte array to integer array (with bugs fixed)
int bytesCount = byteArray.Length;
int intsCount = bytesCount / sizeof(int);
if (bytesCount % sizeof(int) != 0) intsCount++;
int[] result = new int[intsCount];
Buffer.BlockCopy(byteArray, 0, result, 0, byteArray.Length);
Now, about storing the data in files: do not turn the data into a string directly via Encoding.GetString(). Not every bit sequence is a valid representation of characters in a given character set, so converting a sequence of arbitrary bytes into a string will sometimes fail or silently alter the data.
Instead, either store/read the byte array directly to/from a file via File.WriteAllBytes() / File.ReadAllBytes(), or use Convert.ToBase64String() and Convert.FromBase64String() to work with a base64 encoded string representation of the byte array.
Combined, here is some sample code:
ReedSolomonEncoder enc = new ReedSolomonEncoder(GenericGF.AZTEC_DATA_12); // with AZTEC_DATA_8 it works fine because the symbol size is 8 bit
int[] bytesAsInts = Array.ConvertAll(toBytes.ToArray(), c => (int)c);
enc.encode(bytesAsInts, parity);

// Turn the int array into a byte array without losing values
byte[] bytes = new byte[bytesAsInts.Length * sizeof(int)];
Buffer.BlockCopy(bytesAsInts, 0, bytes, 0, bytes.Length);

// Write to file
File.WriteAllBytes(outputFileName, bytes);

// Read from file
bytes = File.ReadAllBytes(outputFileName);

// Turn the byte array back into an int array
int bytesCount = bytes.Length;
int intsCount = bytesCount / sizeof(int);
if (bytesCount % sizeof(int) != 0) intsCount++;
int[] dataAsInts = new int[intsCount];
Buffer.BlockCopy(bytes, 0, dataAsInts, 0, bytes.Length);

// Decoding
ReedSolomonDecoder dec = new ReedSolomonDecoder(GenericGF.AZTEC_DATA_12);
dec.decode(dataAsInts, parity);
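The base64 alternative mentioned above would look roughly like this (a sketch, reusing the bytes and outputFileName variables from the sample):
// Store the encoded data as a base64 string instead of raw bytes
string base64 = Convert.ToBase64String(bytes);
File.WriteAllText(outputFileName, base64);
// ... and read it back later
byte[] restored = Convert.FromBase64String(File.ReadAllText(outputFileName));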

C# Convert IPv6 to ASN string

I am trying to convert an IPv6 address to an ASN. I have been able to get two 64 bit pieces, but I don't know how to put them together to get a single ASN as a string so that I can look it up in the database.
The code so far:
byte[] addrBytes = System.Net.IPAddress.Parse(ipv6Address).GetAddressBytes();
if (System.BitConverter.IsLittleEndian)
{
    // Little-endian machines store multi-byte integers with the
    // least significant byte first. This is a problem, as integer
    // values are sent over the network in big-endian order. Reversing
    // the order of the bytes is a quick way to get the BitConverter
    // methods to convert the byte arrays in big-endian mode.
    System.Collections.Generic.List<byte> byteList = new System.Collections.Generic.List<byte>(addrBytes);
    byteList.Reverse();
    addrBytes = byteList.ToArray();
}
ulong addrWords1, addrWords2;
if (addrBytes.Length > 8)
{
    addrWords1 = System.BitConverter.ToUInt64(addrBytes, 8);
    addrWords2 = System.BitConverter.ToUInt64(addrBytes, 0);
}
else
{
    addrWords1 = 0;
    addrWords2 = System.BitConverter.ToUInt32(addrBytes, 0);
}
Can you please help put addrWords1 and addrWords2 together into a string which represents the ASN?
E.g. 2001:200:: should return ASN 42540528726795050063891204319802818560
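One way to get that decimal string is to treat all 16 address bytes as a single big-endian unsigned integer. A minimal sketch using System.Numerics.BigInteger (an assumption on my part; it sidesteps the two-ulong split entirely):
using System;
using System.Numerics;

byte[] addrBytes = System.Net.IPAddress.Parse("2001:200::").GetAddressBytes();
// GetAddressBytes() returns network (big-endian) order; the BigInteger(byte[])
// constructor expects little-endian, so reverse first.
Array.Reverse(addrBytes);
// Append a zero byte so the value is always read as unsigned (non-negative).
byte[] unsignedBytes = new byte[addrBytes.Length + 1];
Array.Copy(addrBytes, unsignedBytes, addrBytes.Length);
string asn = new BigInteger(unsignedBytes).ToString();
Console.WriteLine(asn); // 42540528726795050063891204319802818560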

C# private function, IncrementArray

Can someone please explain in layman's terms the workings of this C# code?
for (int pos = 0; pos < EncryptedData.Length; pos += AesKey.Length)
{
    Array.Copy(incPKGFileKey, 0, PKGFileKeyConsec, pos, PKGFileKey.Length);
    IncrementArray(ref incPKGFileKey, PKGFileKey.Length - 1);
}
private Boolean IncrementArray(ref byte[] sourceArray, int position)
{
    if (sourceArray[position] == 0xFF)
    {
        if (position != 0)
        {
            if (IncrementArray(ref sourceArray, position - 1))
            {
                sourceArray[position] = 0x00;
                return true;
            }
            else return false;
        }
        else return false;
    }
    else
    {
        sourceArray[position] += 1;
        return true;
    }
}
I'm trying to port an app to Ruby but I'm having trouble understanding how the IncrementArray function works.
IncrementArray increments the byte at the given position; when that byte overflows, the carry is applied to the previous index, unless it is already at index 0.
The entire thing looks like some kind of encryption or decryption code. You might want to look for additional hints on which algorithm is used, as this kind of code is usually not self-explanatory.
It looks to me like a big-endian addition algorithm.
Let's say you've got a long (64 bit, 8 byte) number:
var bigNumber = 0x123456FFFFFFFF;
But for some reason it comes to us as a byte array in big-endian format:
// Get the little-endian byte array representation of the number:
// [0xFF 0xFF 0xFF 0xFF 0x56 0x34 0x12 0x00]
byte[] source = BitConverter.GetBytes(bigNumber);
// Big-endian-ify it by reversing the byte array
source = source.Reverse().ToArray();
So now you want to add one to this "number" in its current form, while maintaining any carries/overflows like you would in normal arithmetic:
// Increment the least significant byte by one, respecting carry
// (as it's big-endian, the least significant byte is the last one)
IncrementArray(ref source, source.Length - 1);
// Re-little-endian-ify it so we can convert it back
source = source.Reverse().ToArray();
// Now convert the array back into a long
var bigNumberIncremented = BitConverter.ToInt64(source, 0);
// Outputs: "Before +1: 123456FFFFFFFF"
Console.WriteLine("Before +1: " + bigNumber.ToString("X"));
// Outputs: "After +1: 12345700000000"
Console.WriteLine("After +1: " + bigNumberIncremented.ToString("X"));

How to pass int[] from C# to C++ using shared memory

I'm trying to pass an array of integers from C# to C++ via a memory-mapped file. Text was easy enough to get working, but I'm out of my depth in the C++ environment and am not sure how to adjust this for an array of integers.
On the c# side, I pass:
pView = LS.Core.Platforms.Windows.Win32.MapViewOfFile(
hMapFile, // Handle of the map object
LS.Core.Platforms.Windows.Win32.FileMapAccess.FILE_MAP_ALL_ACCESS, // Read and write access
0, // High-order DWORD of file offset
ViewOffset, // Low-order DWORD of file offset
ViewSize // Byte# to map to the view
);
byte[] bMessage2 = Encoding.Unicode.GetBytes(Message2 + '\0');
Marshal.Copy(bMessage2, 0, pView2, bMessage2.Length);
Here pView2 is the pointer to the memory mapped file.
On the c++ side, I call:
LPCWSTR pBuf;
pBuf = (LPCWSTR) MapViewOfFile(hMapFile, // handle to map object
FILE_MAP_ALL_ACCESS, // read/write permission
0,
0,
BUF_SIZE);
How would I change this to handle an array of integers instead? Thanks!
a) You can copy the int[] into a byte[]. You can use BitConverter.GetBytes for this or bit arithmetic (byte0 = (byte)(i >> 24); byte1 = (byte)(i >> 16); ...)
b) You can use unsafe code to bit-copy (blit) the int[] to the target byte[]
c) Buffer.BlockCopy can also do this kind of byte-level copy between arrays of primitive types (Array.Copy cannot narrow an int[] into a byte[]).
As per the comments, I will elaborate on b):
// Requires an unsafe context (compile with /unsafe)
int[] src = ...;
IntPtr target = ...;
var bytesToCopy = ...;
fixed (int* intPtr = src)
{
    var srcPtr = (byte*)intPtr;
    var targetPtr = (byte*)target;
    for (int i = 0; i < bytesToCopy; i++)
    {
        targetPtr[i] = srcPtr[i];
    }
}
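For the original question of getting an int[] across, another option (a sketch, assuming pView2 is the IntPtr returned by MapViewOfFile in the C# code above) is the int[] overload of Marshal.Copy, which writes the values straight into the view without an intermediate byte[]:
using System.Runtime.InteropServices;

int[] values = { 10, 20, 30, 40 };              // hypothetical data to share
// Writes values.Length integers directly into the mapped view
Marshal.Copy(values, 0, pView2, values.Length);
On the C++ side, the pointer returned by MapViewOfFile can then be cast to int* instead of LPCWSTR.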
