I have the following struct:
unsafe struct Locomotive
{
    public fixed char locotype[6];
    public int roadno, HP;
}
I have successfully written this to a binary file. Here's the code:
Locomotive l1 = new Locomotive();
for (int i = 0; i <= 5; i++)
{
    l1.locotype[i] = textBox1.Text[i];
}
l1.roadno = int.Parse(textBox2.Text);
l1.HP = int.Parse(textBox3.Text);
BinaryWriter bw = new BinaryWriter(File.Open(@"C:\Documents and Settings\Ruchir Sharma\Desktop\Locodata.bin", FileMode.Append));
IntPtr ip = Marshal.AllocHGlobal(Marshal.SizeOf(l1));
Marshal.StructureToPtr(l1, ip, true);
Byte[] b1 = new Byte[Marshal.SizeOf(l1)];
Marshal.Copy(ip, b1, 0, b1.Length);
bw.Write(b1);
MessageBox.Show("Data written successfully");
Marshal.FreeHGlobal(ip);
bw.Close();
Now, while reading this struct, the character array, i.e. locotype[6], is giving me a problem. I tried the method BinaryReader.ReadChars(), but it didn't work for me. Please help me with reading this struct.
Your "read" code should be the reverse of your "write" code. You didn't write it with WriteChars, so don't use ReadChars to read it. You should use ReadBytes then Marshal.Copy and PtrToStructure.
Frankly, though, this level of "unsafe" (fixed buffers in structs, Marshal, etc) is very rare and specialized - I worry you might be over-engineering this.
How do I access a managed 2D array on the C++ side?
I know we are supposed to use pin_ptr<T> in order to access the managed array from the C++ side. It's straightforward with a 1D array, but with a 2D array I don't know how to use pin_ptr<T> correctly.
My code looks like this, where foo() will be called from the C# side:
void nativeFunc(double **ptr);
void foo(array<double,2> ^A)
{
    const int len = A->GetLength(0);
    double **ptr = (double**) alloca(sizeof(double*) * len);
    for(int i = 0; i < len; i++)
    {
        pin_ptr<double> pinned = &A[i,0]; //Will go out of scope!
        ptr[i] = pinned;
    }
    nativeFunc(ptr);
}
The problem is that my pin_ptr<T> will go out of scope too early, as it's located inside the loop body, so I think the above code is NOT safe. But how can I avoid this? Apparently, it's not allowed to make an array of pin_ptr<T>, either managed or unmanaged. It also cannot be added to a std::vector, and it cannot be made a class member either. So I'm kind of stuck here...
Thanks for any suggestions...
Okay, after some more digging, I found out that GCHandle::Alloc(x, GCHandleType::Pinned) may work as a more flexible replacement for pin_ptr<T> here. However, it seems we can only pin down the managed array as a whole. It does not seem to be possible to pin down a single sub-array (inner array) this way, like pin_ptr<T> would do. Furthermore, by trial and error I have figured out that from the GCHandle handle I can get an unmanaged pointer via hdl.AddrOfPinnedObject().ToPointer(), and that it points to a contiguous block of memory which contains the whole 2D array in a "flattened" (serialized) form. From here I can reconstruct the unmanaged 2D array, using the proper base pointer and stride. But is this considered a "safe" method, and does it always work, or is it implementation specific?
So I have hacked together a solution like this:
class ArrayPinHandlerRAII
{
public:
    ArrayPinHandlerRAII(array<double,2> ^managedArray)
    {
        m_dimOuter = managedArray->GetLength(0);
        m_dimInner = managedArray->GetLength(1);
        m_handle = GCHandle::Alloc(managedArray, GCHandleType::Pinned);
        m_ptr = new double*[m_dimOuter];
        double *basePointer = reinterpret_cast<double*>(m_handle.AddrOfPinnedObject().ToPointer());
        for(size_t d = 0; d < m_dimOuter; d++)
        {
            m_ptr[d] = basePointer;
            basePointer += m_dimInner;
        }
    }
    ~ArrayPinHandlerRAII(void)
    {
        delete [] m_ptr;
        m_handle.Free();
    }
    inline double **data(void)
    {
        return m_ptr;
    }
    inline const size_t &dimOuter(void) const
    {
        return m_dimOuter;
    }
    inline const size_t &dimInner(void) const
    {
        return m_dimInner;
    }
private:
    GCHandle m_handle;
    double **m_ptr;
    size_t m_dimOuter;
    size_t m_dimInner;
};
Any opinions? ;-)
Okay, one of the examples in MSDN has the following important info:
Pinning a sub-object defined in a managed object has the effect of pinning the entire object. For example, if any element of an array is pinned, then the whole array is also pinned. There are no extensions to the language for declaring a pinned array. To pin an array, declare a pinning pointer to its element type, and pin one of its elements.
So the code can actually be simplified to:
void nativeFunc(double **ptr);
void foo(array<double,2> ^A)
{
    int dimOuter = A->GetLength(0);
    int dimInner = A->GetLength(1);
    pin_ptr<double> pinned = &A[0,0]; //This pins the *entire* array!
    double **ptr = (double**) alloca(sizeof(double*) * dimOuter);
    double *basePtr = pinned;
    for(int i = 0; i < dimOuter; i++)
    {
        ptr[i] = basePtr;
        basePtr += dimInner;
    }
    nativeFunc(ptr);
}
I'm trying to convert an object that I have as a byte[] back into an object.
I've tried using this code I found online:
object byteArrayToObject(byte[] bytes)
{
    try
    {
        MemoryStream ms = new MemoryStream(bytes);
        BinaryFormatter bf = new BinaryFormatter();
        //ms.Position = 0;
        return bf.Deserialize(ms, null);
    }
    catch
    {
        return null;
    }
}
SerializationException: "End of Stream encountered before parsing was
completed.".
I've tried it with the ms.Position = 0 line uncommented too, of course...
bytes is only 8 bytes long, and none of the bytes are null.
Suggestions?
[edit]
The byte[] was written to a binary file from a C++ program using something along the lines of
template <typename T>
void WriteToFile(std::ostream& file, T* value)
{
    file.write(reinterpret_cast<char*>(value), sizeof(*value));
}
Where value may be a number of different types.
I can convert some objects from the file okay using BitConverter, but anything BitConverter doesn't cover, I can't handle.
As cdhowie stated, you will need to manually deserialize the encoded data. Based on the limited information available, you may want either an array of objects or an object containing an array; it looks like you have a single long, but there is no way to know from your code. You will need to recreate your object in its true form, so take myLong below as a simple example for a single long array. Since it was unspecified, I'll assume you want a struct containing an array like:
public struct myLong {
    public long[] value;
}
You could do the same thing with an array of structs, or classes with minor changes to the code posted below.
Your method will be something like this: (written in the editor)
private myLong? byteArrayToObject(byte[] bytes) {
    try
    {
        int len = sizeof(long);
        myLong data = new myLong();
        data.value = new long[bytes.Length / len];
        int byteindex = 0;
        for (int i = 0; i < data.value.Length; i++) {
            data.value[i] = BitConverter.ToInt64(bytes, byteindex);
            byteindex += len;
        }
        return data;
    }
    catch
    {
        // myLong is a struct, so returning null requires a nullable return type (myLong?)
        return null;
    }
}
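Hypothetical usage (the file name here is just a placeholder for wherever your C++ program wrote the bytes):
byte[] bytes = File.ReadAllBytes("data.bin");   // placeholder path
myLong? result = byteArrayToObject(bytes);
if (result.HasValue)
    Console.WriteLine(result.Value.value[0]);   // first 64-bit value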
I'm writing a library to simplify my network programming in future projects. I want it to be robust and efficient, because it will be in nearly all of my projects in the future. (BTW, both the server and the client will be using my library, so I'm not assuming a protocol in my question.) I'm writing a function for receiving strings from a network stream, where I use 31 bytes of buffer and one byte as a sentinel. The sentinel value indicates which byte, if any, is the EOF. Here's my code for your use or scrutiny...
public string getString()
{
    string returnme = "";
    while (true)
    {
        int[] buff = new int[32];
        for (int i = 0; i < 32; i++)
        {
            buff[i] = ns.ReadByte();
        }
        if (buff[31] > 31) { /*throw some error*/ }
        for (int i = 0; i < buff[31]; i++)
        {
            returnme += (char)buff[i];
        }
        if (buff[31] != 31)
        {
            break;
        }
    }
    return returnme;
}
Edit: Is this the best way (efficient, practical, etc.) to accomplish what I'm doing?
Is this the best way (efficient, practical, etc.) to accomplish what I'm doing?
No. Firstly, you are limiting yourself to characters in the 0-255 code-point range, and that isn't enough; and secondly, serializing strings is a solved problem. Just use an Encoding, typically UTF-8. As part of a network stream, this probably means "encode the length, encode the data" and "read the length, buffer that much data, decode the data". As another note: you aren't correctly handling the EOF scenario if ReadByte() returns a negative value.
As a small corollary, note that appending to a string in a loop is never a good idea; if you did do it that way, use a StringBuilder. But don't do it that way. My code would be something more like (hey, whadya know, here's my actual string-reading code from protobuf-net, simplified a bit):
// read the length
int bytes = (int)ReadUInt32Variant(false);
if (bytes == 0) return "";
// buffer that much data
if (available < bytes) Ensure(bytes, true);
// read the string
string s = encoding.GetString(ioBuffer, ioIndex, bytes);
// update the internal buffer data
available -= bytes;
position += bytes;
ioIndex += bytes;
return s;
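If you don't want to dig through the protobuf-net internals above, a minimal self-contained sketch of the "length prefix + UTF-8" pattern over a plain Stream might look like this (the helper names and the raw 4-byte length prefix are my own choices, not a standard; uses System.IO and System.Text):
static void WriteString(Stream stream, string value)
{
    byte[] data = Encoding.UTF8.GetBytes(value);          // encode the data
    byte[] prefix = BitConverter.GetBytes(data.Length);   // encode the length (4 bytes)
    stream.Write(prefix, 0, prefix.Length);
    stream.Write(data, 0, data.Length);
}
static string ReadString(Stream stream)
{
    int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);   // read the length
    return Encoding.UTF8.GetString(ReadExactly(stream, length));    // decode the data
}
static byte[] ReadExactly(Stream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read <= 0) throw new EndOfStreamException();   // handle EOF explicitly
        offset += read;
    }
    return buffer;
}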
As a final note, I would say: if you are sending structured messages, give some serious consideration to using a pre-rolled serialization API that specialises in this stuff. For example, you could then just do something like:
var msg = new MyMessage { Name = "abc", Value = 123, IsMagic = true };
Serializer.SerializeWithLengthPrefix(networkStream, msg);
and at the other end:
var msg = Serializer.DeserializeWithLengthPrefix<MyMessage>(networkStream);
Console.WriteLine(msg.Name); // etc
Job done.
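For completeness, MyMessage in that example is just a contract type; with protobuf-net it would typically be declared something like this (the field numbers here are arbitrary):
[ProtoContract]
public class MyMessage
{
    [ProtoMember(1)] public string Name { get; set; }
    [ProtoMember(2)] public int Value { get; set; }
    [ProtoMember(3)] public bool IsMagic { get; set; }
}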
I think you should use a StringBuilder object with a fixed size for better performance.
I have a program that needs to pass data from C++ to C# and back for processing. In order to do this, I have retrieved a structure, converted it into a byte array and then converted it back on the other end. However, when converting it back, the data is not correct, even though the memory dump shows that the values in memory for each variable are identical.
Here is the code to retrieve the value:
array<Byte> ^ GetPublicKeyBlob(String ^ ContainerName) {
    const TCHAR * tContainer = context->marshal_as<const TCHAR*>(ContainerName);
    HCRYPTPROV hProv = NULL;
    CryptAcquireContext(&hProv, tContainer, MS_ENHANCED_PROV, PROV_RSA_FULL, CRYPT_MACHINE_KEYSET);
    DWORD dwKeySize = 0;
    CryptExportPublicKeyInfo(hProv, AT_SIGNATURE, X509_ASN_ENCODING, NULL, &dwKeySize);
    PCERT_PUBLIC_KEY_INFO pbKey = (PCERT_PUBLIC_KEY_INFO)calloc(dwKeySize, sizeof(BYTE));
    CryptExportPublicKeyInfo(hProv, AT_SIGNATURE, X509_ASN_ENCODING, (PCERT_PUBLIC_KEY_INFO)pbKey, &dwKeySize);
    array<Byte> ^ retVal = gcnew array<Byte>(dwKeySize);
    for(int i = 0; i < dwKeySize; i++)
        retVal[i] = ((BYTE*)pbKey)[i];
    free(pbKey);
    return retVal;
}
Then on the other end, I change it back to a PCERT_PUBLIC_KEY_INFO structure with the following code:
BYTE * cpiBuffer = (BYTE*)calloc(_PublicKey->Length, sizeof(BYTE));
for(int i = 0; i < _PublicKey->Length; i++)
    cpiBuffer[i] = _PublicKey[i];
PCERT_PUBLIC_KEY_INFO cpi = (PCERT_PUBLIC_KEY_INFO)cpiBuffer;
When looking at them in a memory dump, pbKey, retVal, _PublicKey, cpiBuffer and cpi all have the exact same values. But when looking at cpi as a structure, the Algorithm.pszObjId points to some erroneous memory location and when I try to use it in a function, it fails. What am I doing wrong here?
typedef struct _CRYPT_ALGORITHM_IDENTIFIER {
    LPSTR pszObjId;
    CRYPT_OBJID_BLOB Parameters;
} CRYPT_ALGORITHM_IDENTIFIER, *PCRYPT_ALGORITHM_IDENTIFIER;
As you can see, pszObjId is a pointer; its contents are somewhere else in memory. By casting the PCERT_PUBLIC_KEY_INFO structure to a byte array, you are only getting the value of the pointer, not what it points to.
On a side note, I'm not sure why you are marshaling as TCHAR*; if you want bytes, then you should use char* or unsigned char*. If UNICODE is defined, TCHAR will be wchar_t and that might cause some difficulties.
I've seen here, and also by googling for "marshal", several ways to convert a byte array to a struct.
But what I'm looking for is whether there is a way to read an array of structs from a file (or whatever memory input) in one step.
I mean, loading an array of structs from a file normally takes more CPU time (a read per field using a BinaryReader) than IO time. Is there any workaround?
I'm trying to load about 400K structs from a file as fast as possible.
Thanks
pablo
The following URL may be of interest to you.
http://www.codeproject.com/KB/files/fastbinaryfileinput.aspx
Otherwise, I'm thinking of pseudo code like the following: read the binary data in a single shot and convert it back to structures.
public struct YourStruct
{
    public int First;
    public long Second;
    public double Third;
}
static unsafe byte[] YourStructToBytes( YourStruct[] s )
{
    byte[] arr = new byte[ sizeof(YourStruct) * s.Length ];
    fixed( byte* parr = arr )
    {
        // treat the byte array as an array of YourStruct and copy each element in
        YourStruct* dest = (YourStruct*)parr;
        for( int i = 0; i < s.Length; i++ )
            dest[i] = s[i];
    }
    return arr;
}
static unsafe YourStruct[] BytesToYourStruct( byte[] arr, int arrayLen )
{
    if( arr.Length < (sizeof(YourStruct) * arrayLen) )
        throw new ArgumentException();
    YourStruct[] s = new YourStruct[arrayLen];
    fixed( byte* parr = arr )
    {
        // reinterpret the raw bytes as structs and copy them out
        YourStruct* src = (YourStruct*)parr;
        for( int i = 0; i < arrayLen; i++ )
            s[i] = src[i];
    }
    return s;
}
Now you can read the byte array from the file in a single shot and convert it back to structures using BytesToYourStruct.
Hope you can implement this idea and check...
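For example, roughly (the file name is a placeholder, and sizeof(YourStruct) needs an unsafe context):
byte[] raw = File.ReadAllBytes("data.bin");        // one IO call for the whole file
int count = raw.Length / sizeof(YourStruct);       // how many records the file holds
YourStruct[] records = BytesToYourStruct(raw, count);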
I found a potential solution at this site -
http://www.eggheadcafe.com/software/aspnet/32846931/writingreading-an-array.aspx
It basically says to use BinaryFormatter like this:
FileStream fs = new FileStream("DataFile.dat", FileMode.Create);
BinaryFormatter formatter = new BinaryFormatter();
formatter.Serialize(fs, somestruct);
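Presumably the read side would be the mirror image, something like this (a sketch; it assumes the struct type is marked [Serializable], and SomeStruct[] is a stand-in for whatever type somestruct actually is):
FileStream fs = new FileStream("DataFile.dat", FileMode.Open);
BinaryFormatter formatter = new BinaryFormatter();
SomeStruct[] somestruct = (SomeStruct[])formatter.Deserialize(fs);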
I also found two questions from this site - Reading a C/C++ data structure in C# from a byte array
and
How to marshal an array of structs - (.Net/C# => C++)
I haven't done this before, being a C# .NET beginner myself. I hope this solution helps.