Is it possible in C# .Net (3.5 and above) to copy a variable into a byte[] buffer without creating any garbage in the process?
For instance:
int variableToCopy = 9861;
byte[] buffer = new byte[1024];
byte[] bytes = BitConverter.GetBytes(variableToCopy);
Buffer.BlockCopy(bytes, 0, buffer, 0, 4);
float anotherVariableToCopy = 6743897.6377f;
bytes = BitConverter.GetBytes(anotherVariableToCopy);
Buffer.BlockCopy(bytes, 0, buffer, 4, sizeof(float));
...
...creates the intermediary byte[] bytes object, which becomes garbage (presuming a reference is no longer held to it).
I wonder whether, using bitwise operators, the variable can be copied directly into the buffer without creating the intermediary byte[]?
Using pointers is the best and fastest way:
You can do this with any number of variables; there is no wasted memory, and the fixed statement adds only a very small amount of overhead.
int v1 = 123;
float v2 = 253F;
byte[] buffer = new byte[1024];

fixed (byte* pbuffer = buffer)
{
    //v1 is stored on the first 4 bytes of the buffer:
    byte* scan = pbuffer;
    *(int*)(scan) = v1;
    scan += 4; //4 bytes per int

    //v2 is stored on the second 4 bytes of the buffer:
    *(float*)(scan) = v2;
    scan += 4; //4 bytes per float
}
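Note that the fixed block above needs an unsafe context (compile with /unsafe). If you would rather stay in safe code, the bitwise approach mentioned in the question works for integer types without allocating anything. A minimal sketch under that assumption (WriteInt32 is a helper name of my own, not part of the framework or of this answer):

static void WriteInt32(byte[] buffer, int offset, int value)
{
    // Shift the int into the buffer byte by byte, little-endian order
    // (the same order BitConverter.GetBytes produces on little-endian machines).
    buffer[offset]     = (byte)value;
    buffer[offset + 1] = (byte)(value >> 8);
    buffer[offset + 2] = (byte)(value >> 16);
    buffer[offset + 3] = (byte)(value >> 24);
}

// Usage: WriteInt32(buffer, 0, variableToCopy);

For a double you could push BitConverter.DoubleToInt64Bits(value) through a similar 8-byte helper; for float I am not aware of an allocation-free safe conversion in .NET 3.5, so the pointer approach above remains the most direct option there.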
Why can't you just do:
byte[] buffer = BitConverter.GetBytes(variableToCopy);
Note that the array here is not an indirection into the storage for the original Int32; it is very much a copy.
You are perhaps worried that bytes in your example is equivalent to:
unsafe
{
byte* bytes = (byte*) &variableToCopy;
}
...but I assure you that it is not; it is a byte-by-byte copy of the bytes in the source Int32.
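A quick way to convince yourself of that (just an illustrative snippet reusing the value from the question):

int variableToCopy = 9861;
byte[] bytes = BitConverter.GetBytes(variableToCopy);
bytes[0] = 0;                        // mutate the copy
Console.WriteLine(variableToCopy);   // still 9861; the original int is untouched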
EDIT:
Based on your edit, I think you want something like this (requires unsafe context):
public unsafe static void CopyBytes(int value, byte[] destination, int offset)
{
    if (destination == null)
        throw new ArgumentNullException("destination");

    if (offset < 0 || (offset + sizeof(int) > destination.Length))
        throw new ArgumentOutOfRangeException("offset");

    fixed (byte* ptrToStart = destination)
    {
        *(int*)(ptrToStart + offset) = value;
    }
}
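Calling it for the values in the question would then look something like this (a usage sketch, not part of the original answer):

int variableToCopy = 9861;
byte[] buffer = new byte[1024];

// Writes the four bytes of the int straight into the buffer; no temporary array is created.
CopyBytes(variableToCopy, buffer, 0);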
I really need your help to port this C# code to Delphi:
public unsafe byte[] Encode(byte[] inputPcmSamples, int sampleLength, out int encodedLength)
{
    if (disposed)
        throw new ObjectDisposedException("OpusEncoder");

    int frames = FrameCount(inputPcmSamples);
    IntPtr encodedPtr;
    byte[] encoded = new byte[MaxDataBytes];
    int length = 0;

    /* How can this be ported to Delphi? */
    fixed (byte* benc = encoded)
    {
        encodedPtr = new IntPtr((void*)benc);
        length = API.opus_encode(_encoder, inputPcmSamples, frames, encodedPtr, sampleLength);
    }

    encodedLength = length;
    if (length < 0)
        throw new Exception("Encoding failed - " + ((Errors)length).ToString());
    return encoded;
}
The main code part that I'm looking for is:
fixed (byte* benc = encoded)
{
    encodedPtr = new IntPtr((void*)benc);
    /* API.opus_encode is declared in another class */
    length = API.opus_encode(_encoder, inputPcmSamples, frames, encodedPtr, sampleLength);
}
Many thanks.
You seem to want to know how to deal with the fixed block in the C# code.
byte[] encoded = new byte[MaxDataBytes];
....
fixed (byte* benc = encoded)
{
    encodedPtr = new IntPtr((void*)benc);
    length = API.opus_encode(_encoder, inputPcmSamples, frames, encodedPtr, sampleLength);
}
This use of fixed is to pin the managed array to obtain a pointer to be passed to the unmanaged code.
In Delphi we just want an array of bytes, and a pointer to that array. That would look like this:
var
  encoded: TBytes;
....
SetLength(encoded, MaxDataBytes);
....
length := opus_encode(..., Pointer(encoded), ...);
or perhaps:
length := opus_encode(..., PByte(encoded), ...);
or perhaps:
length := opus_encode(..., @encoded[0], ...);
depending on how you declared the imported function opus_encode and your preferences.
If MaxDataBytes were small enough for the buffer to live on the stack, and were known at compile time, then a fixed-length array could be used.
I am doing some data chunking and I'm seeing an interesting issue when sending binary data in my response. I can confirm that the length of the byte array is below my data limit of 4 megabytes, but when I receive the message, its total size is over 4 megabytes.
For the example below, I used the largest chunk size I could so I could illustrate the issue while still receiving a usable chunk.
The size of the binary data is 3,040,870 on the service side and the client (once the message is deserialized). However, I can also confirm that the byte array is actually just under 4 megabytes (this was done by actually copying the binary data from the message and pasting it into a text file).
So, is WCF causing these issues and, if so, is there anything I can do to prevent it? If not, what might be causing this inflation on my side?
Thanks!
The usual way of sending byte[]s in SOAP messages is to base64-encode the data. This encoding takes 33% more space than binary encoding, which accounts for the size difference almost precisely.
You could adjust the max size or chunk size slightly so that the end result is within the right range, or use another encoding, e.g. MTOM, to eliminate this 33% overhead.
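As a rough illustration of that overhead, using the 3,040,870-byte figure from the question (allocating a buffer just to measure it is, of course, only for demonstration):

byte[] payload = new byte[3040870];
string encoded = Convert.ToBase64String(payload);

// Base64 emits 4 characters for every 3 input bytes, rounded up to a whole group:
// 4 * ceil(3,040,870 / 3) = 4,054,496 characters - roughly 33% larger - before any
// SOAP envelope overhead is added on top.
Console.WriteLine(encoded.Length);   // 4054496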
If you're stuck with SOAP, you can offset the encoding overhead Tim S. talked about by using the System.IO.Compression library in .NET - you'd compress the data first, before building and sending the SOAP message.
You'd compress with this:
public static byte[] Compress(byte[] data)
{
    MemoryStream ms = new MemoryStream();
    DeflateStream ds = new DeflateStream(ms, CompressionMode.Compress);
    ds.Write(data, 0, data.Length);
    ds.Flush();
    ds.Close();
    return ms.ToArray();
}
On the receiving end, you'd use this to decompress:
public static byte[] Decompress(byte[] data)
{
    const int BUFFER_SIZE = 256;
    byte[] tempArray = new byte[BUFFER_SIZE];
    List<byte[]> tempList = new List<byte[]>();
    int count = 0;
    int length = 0;

    MemoryStream ms = new MemoryStream(data);
    DeflateStream ds = new DeflateStream(ms, CompressionMode.Decompress);

    while ((count = ds.Read(tempArray, 0, BUFFER_SIZE)) > 0) {
        if (count == BUFFER_SIZE) {
            tempList.Add(tempArray);
            tempArray = new byte[BUFFER_SIZE];
        } else {
            byte[] temp = new byte[count];
            Array.Copy(tempArray, 0, temp, 0, count);
            tempList.Add(temp);
        }
        length += count;
    }

    byte[] retVal = new byte[length];
    count = 0;
    foreach (byte[] temp in tempList) {
        Array.Copy(temp, 0, retVal, count, temp.Length);
        count += temp.Length;
    }
    return retVal;
}
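For reference, a round trip with these two helpers would look something like this (a usage sketch; the payload here is just a placeholder):

byte[] original = new byte[] { 1, 2, 3, 4, 5 };

// Compress before building the SOAP message, decompress on the receiving side.
byte[] compressed = Compress(original);
byte[] restored = Decompress(compressed);   // restored now holds the same bytes as original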
I tried writing structures to a binary file but I can't read them correctly.
This is my struct. It has a dynamic number of "values". If the number of values is 3, then GetSize() will return 8 + (8*3) = 32
[StructLayout(LayoutKind.Sequential)]
public struct Sample
{
    public long timestamp;
    public double[] values;

    public int GetSize()
    {
        return sizeof(long) + sizeof(double) * values.Length;
    }
}
First, I convert the structure to bytes by:
public static byte[] SampleToBytes(Sample samp)
{
    int size = samp.GetSize();
    byte[] arr = new byte[size];
    IntPtr ptr = Marshal.AllocHGlobal(size);
    Marshal.StructureToPtr(samp, ptr, true);
    Marshal.Copy(ptr, arr, 0, size);
    Marshal.FreeHGlobal(ptr);
    return arr;
}
Then, I write the bytes using BinaryWriter, and exit.
When I run the program again and read the file I saved, I use BinaryReader. I read 32 bytes at a time from the file and convert each 32-byte array back to a struct using:
public static Sample BytesToSample(byte[] arr)
{
    Sample samp = new Sample();
    IntPtr ptr = Marshal.AllocHGlobal(32);
    Marshal.Copy(arr, 0, ptr, 32);
    samp = (Sample)Marshal.PtrToStructure(ptr, samp.GetType());
    Marshal.FreeHGlobal(ptr);
    return samp;
}
However, a SafeArrayTypeMismatchException occurs at PtrToStructure().
Could anyone please tell me what I am doing wrong?
Thanks.
You do realize that double[], being an array type, is a reference type? The struct holds the long followed by a reference to the heap object.
That would be the reason why it doesn't work, I think.
I believe you should simply write the elements of your array to a binary writer.
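A minimal sketch of that approach, assuming you are free to change the file format slightly (this version also writes the element count, which the fixed 32-byte layout in the question did not; the helper names are mine):

public static void WriteSample(BinaryWriter writer, Sample samp)
{
    writer.Write(samp.timestamp);
    writer.Write(samp.values.Length);   // store the count so it can be read back
    foreach (double v in samp.values)
        writer.Write(v);
}

public static Sample ReadSample(BinaryReader reader)
{
    Sample samp = new Sample();
    samp.timestamp = reader.ReadInt64();
    int count = reader.ReadInt32();
    samp.values = new double[count];
    for (int i = 0; i < count; i++)
        samp.values[i] = reader.ReadDouble();
    return samp;
}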
I have a client-server application where the server transmits a 4-byte integer specifying how large the next transmission is going to be. When I read the 4-byte integer on the client side (specifying FILE_SIZE), the next time I read the stream I get FILE_SIZE + 4 bytes read.
Do I need to specify the offset to 4 when reading from this stream, or is there a way to automatically advance the NetworkStream so my offset can always be 0?
SERVER
NetworkStream theStream = theClient.GetStream();
//...
//Calculate file size with FileInfo and put into byte[] size
//...
theStream.Write(size, 0, size.Length);
theStream.Flush();
CLIENT
NetworkStream theStream = theClient.GetStream();
//read size
byte[] size = new byte[4];
int bytesRead = theStream.Read(size, 0, 4);
...
//read content
byte[] content = new byte[4096];
bytesRead = theStream.Read(content, 0, 4096);
Console.WriteLine(bytesRead); // <-- Prints filesize + 4
Right; found it; FileInfo.Length is a long; your call to:
binWrite.Write(fileInfo.Length);
writes 8 bytes, little-endian. You then read that back via:
filesize = binRead.ReadInt32();
which, being little-endian, will give you the same value (at least for lengths that fit in 32 bits). You have four zero bytes left unused in the stream, though (from the high bytes of the long) - hence the 4-byte mismatch.
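To make the 8-byte versus 4-byte point concrete (a small illustrative snippet, not from the original answer):

using (MemoryStream ms = new MemoryStream())
using (BinaryWriter bw = new BinaryWriter(ms))
{
    long fileLength = 123456;        // e.g. FileInfo.Length
    bw.Write(fileLength);            // long overload: writes 8 bytes
    bw.Write((int)fileLength);       // int overload: writes 4 bytes
    Console.WriteLine(ms.Length);    // 12
}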
Use one of the following (either write the length as an int, or read it back as a long):
binWrite.Write((int)fileInfo.Length);
filesize = binRead.ReadInt64();
NetworkStream certainly advances, but in both cases, your read is unreliable; a classic "read known amount of content" would be:
static void ReadAll(Stream source, byte[] buffer, int bytes) {
    if (bytes > buffer.Length) throw new ArgumentOutOfRangeException("bytes");
    int bytesRead, offset = 0;
    while (bytes > 0 && (bytesRead = source.Read(buffer, offset, bytes)) > 0) {
        offset += bytesRead;
        bytes -= bytesRead;
    }
    if (bytes != 0) throw new EndOfStreamException();
}
with:
ReadAll(theStream, size, 4);
...
ReadAll(theStream, content, contentLength);
Note also that you need to be careful with endianness when parsing the length prefix.
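For example, one way to keep the byte order explicit on the reading side (a sketch, assuming the little-endian order that BinaryWriter uses):

// Assemble the 4-byte prefix manually so the byte order is spelled out,
// rather than depending on the endianness of the machine running BitConverter.
int contentLength = size[0] | (size[1] << 8) | (size[2] << 16) | (size[3] << 24);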
I suspect you simply aren't reading the complete data.
Is there a way of mapping data collected on a stream or array to a data structure or vice-versa?
In C++ this would simply be a matter of casting a pointer to the stream as the data type I want to use (or vice versa for the reverse).
For example, in C++:
Mystruct * pMyStrct = (Mystruct*)&SomeDataStream;
pMyStrct->Item1 = 25;
int iReadData = pMyStrct->Item2;
Obviously the C++ way is pretty unsafe unless you are sure of the quality of the incoming stream data, but for outgoing data it is super quick and easy.
Most people use .NET serialization (there are a faster binary formatter and a slower XML formatter; both depend on reflection and are version tolerant to a certain degree).
However, if you want the fastest (unsafe) way - why not:
Writing:
YourStruct o = new YourStruct();
byte[] buffer = new byte[Marshal.SizeOf(typeof(YourStruct))];
GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
Marshal.StructureToPtr(o, handle.AddrOfPinnedObject(), false);
handle.Free();
Reading:
handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
o = (YourStruct)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(YourStruct));
handle.Free();
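For this to map to a predictable byte layout, YourStruct should be a plain value-type struct with an explicit layout; a sketch of what that might look like (the field names and the Pack value here are placeholders of my own, not from the answer):

// Sequential layout keeps the fields in declaration order; Pack = 1 removes padding,
// which may or may not match the wire format you need.
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct YourStruct
{
    public int Item1;
    public double Item2;
    public ushort Item3;
}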
In case lubos hasko's answer was not unsafe enough, there is also the really unsafe way, using pointers in C#. Here are some tips and pitfalls I've run into:
using System;
using System.Runtime.InteropServices;
using System.IO;
using System.Diagnostics;
// Use LayoutKind.Sequential to prevent the CLR from reordering your fields.
[StructLayout(LayoutKind.Sequential)]
unsafe struct MeshDesc
{
public byte NameLen;
// Here fixed means store the array by value, like in C,
// though C# exposes access to Name as a char*.
// fixed also requires 'unsafe' on the struct definition.
public fixed char Name[16];
// You can include other structs like in C as well.
public Matrix Transform;
public uint VertexCount;
// But not both, you can't store an array of structs.
//public fixed Vector Vertices[512];
}
[StructLayout(LayoutKind.Sequential)]
unsafe struct Matrix
{
public fixed float M[16];
}
// This is how you do unions
[StructLayout(LayoutKind.Explicit)]
unsafe struct Vector
{
[FieldOffset(0)]
public fixed float Items[16];
[FieldOffset(0)]
public float X;
[FieldOffset(4)]
public float Y;
[FieldOffset(8)]
public float Z;
}
class Program
{
unsafe static void Main(string[] args)
{
var mesh = new MeshDesc();
var buffer = new byte[Marshal.SizeOf(mesh)];
// Set where NameLen will be read from.
buffer[0] = 12;
// Use Buffer.BlockCopy to raw copy data across arrays of primitives.
// Note we copy to offset 2 here: chars have an alignment of 2, so there is
// a padding byte after NameLen: just like in C.
Buffer.BlockCopy("Hello!".ToCharArray(), 0, buffer, 2, 12);
// Copy data to struct
Read(buffer, out mesh);
// Print the Name we wrote above:
var name = new char[mesh.NameLen];
// Use Marshal.Copy to copy between arrays and pointers to arrays.
unsafe { Marshal.Copy((IntPtr)mesh.Name, name, 0, mesh.NameLen); }
// Note you can also use the String.String(char*) overloads
Console.WriteLine("Name: " + new string(name));
// If Erik Myers likes it...
mesh.VertexCount = 4711;
// Copy data from struct:
// MeshDesc is a struct, and is on the stack, so its
// memory is effectively pinned by the stack pointer.
// This means '&' is sufficient to get a pointer.
Write(&mesh, buffer);
// Watch for alignment again, and note you have endianness to worry about...
int vc = buffer[100] | (buffer[101] << 8) | (buffer[102] << 16) | (buffer[103] << 24);
Console.WriteLine("VertexCount = " + vc);
}
unsafe static void Write(MeshDesc* pMesh, byte[] buffer)
{
// But byte[] is on the heap, and therefore needs
// to be flagged as pinned so the GC won't try to move it
// from under you - this can be done most efficiently with
// 'fixed', but can also be done with GCHandleType.Pinned.
fixed (byte* pBuffer = buffer)
*(MeshDesc*)pBuffer = *pMesh;
}
unsafe static void Read(byte[] buffer, out MeshDesc mesh)
{
fixed (byte* pBuffer = buffer)
mesh = *(MeshDesc*)pBuffer;
}
}
If it's .NET on both sides:
I think you should use binary serialization and send the byte[] result.
Trusting your struct to be fully blittable can be trouble.
You will pay some overhead (both CPU and network) but will be safe.
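A minimal sketch of that suggestion, assuming the type being sent is marked [Serializable] (the helper names are mine):

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

public static byte[] ToBytes(object graph)
{
    BinaryFormatter formatter = new BinaryFormatter();
    using (MemoryStream ms = new MemoryStream())
    {
        formatter.Serialize(ms, graph);
        return ms.ToArray();   // send this byte[] over the network
    }
}

public static T FromBytes<T>(byte[] data)
{
    BinaryFormatter formatter = new BinaryFormatter();
    using (MemoryStream ms = new MemoryStream(data))
    {
        return (T)formatter.Deserialize(ms);
    }
}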
If you need to populate each member variable by hand, you can generalize it a bit, as far as the primitives are concerned, by using FormatterServices to retrieve, in order, the list of variable types associated with an object. I've had to do this in a project where I had a lot of different message types coming off the stream, and I definitely didn't want to write the serializer/deserializer for each message.
Here's the code I used to generalize the deserialization from a byte[].
public virtual bool SetMessageBytes(byte[] message)
{
    MemberInfo[] members = FormatterServices.GetSerializableMembers(this.GetType());
    object[] values = FormatterServices.GetObjectData(this, members);
    int j = 0;

    for (int i = 0; i < members.Length; i++)
    {
        string[] var = members[i].ToString().Split(new char[] { ' ' });
        switch (var[0])
        {
            case "UInt32":
                values[i] = (UInt32)((message[j] << 24) + (message[j + 1] << 16) + (message[j + 2] << 8) + message[j + 3]);
                j += 4;
                break;
            case "UInt16":
                values[i] = (UInt16)((message[j] << 8) + message[j + 1]);
                j += 2;
                break;
            case "Byte":
                values[i] = (byte)message[j++];
                break;
            case "UInt32[]":
                if (values[i] != null)
                {
                    int len = ((UInt32[])values[i]).Length;
                    byte[] b = new byte[len * 4];
                    Array.Copy(message, j, b, 0, len * 4);
                    Array.Copy(Utilities.ByteArrayToUInt32Array(b), (UInt32[])values[i], len);
                    j += len * 4;
                }
                break;
            case "Byte[]":
                if (values[i] != null)
                {
                    int len = ((byte[])values[i]).Length;
                    Array.Copy(message, j, (byte[])(values[i]), 0, len);
                    j += len;
                }
                break;
            default:
                throw new Exception("ByteExtractable::SetMessageBytes Unsupported Type: " + var[1] + " is of type " + var[0]);
        }
    }

    FormatterServices.PopulateObjectMembers(this, members, values);
    return true;
}