Problem with code converted from C++ to C#

I converted some code from a C++ application I wrote a long time ago to C#. In C++ I had a library I used that was a bit buffer, but my lack of C# knowledge has somewhat complicated the conversion.
When I query my application and simply use the BinaryWriter without casting any values properly (just bf.Write(-1) and bf.Write("stringhere")), the query programs at least see it, they just get the wrong information. When I cast the values properly (to long, byte, short, etc.) it completely breaks, and the query application doesn't even see the server anymore.
C++ Code Snippet
void PlayerManager::BuildReplyInfo()
{
    // Delete the old packet
    g_ReplyInfo.Reset();
    g_ReplyInfo.WriteLong(-1);
    g_ReplyInfo.WriteByte(73);
    g_ReplyInfo.WriteByte(g_ProtocolVersion.GetInt());
    g_ReplyInfo.WriteString(iserver->GetName());
    g_ReplyInfo.WriteString(iserver->GetMapName());
    g_ReplyInfo.WriteString(gGameType);
}
C# Code
public static byte[] ConvertStringToByteArray(string str)
{
    System.Text.UTF8Encoding encoding = new System.Text.UTF8Encoding();
    return encoding.GetBytes(str);
}
//-----------------------------------
while (true)
{
    data = new byte[1024];
    recv = socket.ReceiveFrom(data, ref Remote);
    Console.WriteLine("Message length is " + recv);
    // If the length is 25 and the 5th byte is 'T', it is an A2S_INFO query
    if (recv == 25 && data[4] == 84)
    {
        Console.WriteLine("Source Engine Query!");
        data = BuildReplyInformation();
        socket.SendTo(data, 0, data.Length, SocketFlags.None, Remote);
    }
}
}
public static byte[] BuildReplyInformation()
{
    MemoryStream stream = new MemoryStream();
    BinaryWriter writer = new BinaryWriter(stream);
    writer.Write((long)(-1));
    writer.Write((byte)(73)); // Steam Version
    writer.Write((byte)(15)); // Protocol
    writer.Write(ConvertStringToByteArray("Minecraft Server\0")); // Hostname
    writer.Write(ConvertStringToByteArray("Map Name\0")); // Map Name
    writer.Write(ConvertStringToByteArray("tf\0")); // Game Directory
    writer.Write(ConvertStringToByteArray("Minecraft Server\0")); // Game Description
    writer.Write((short)(440));
    writer.Write((byte)(15)); // Players
    writer.Write((byte)(32)); // Max Players
    writer.Write((byte)(0)); // Bots
    writer.Write((byte)(100));
    writer.Write((byte)(119)); // 108 Linux, 119 Windows
    writer.Write((byte)(0)); // Password Boolean
    writer.Write((byte)(01)); // Vac Secured
    writer.Write(ConvertStringToByteArray("1.1.3.7\0"));
    return stream.ToArray();
}

A couple of ideas that might get you on track:
Are you sure you need UTF-8 as the string encoding?
When you look at the array and compare it to the intended structure, are you able to find out at what point the array stops complying with the standard?

Just a few things to keep in mind:
1. UTF-8 strings sometimes start with a BOM (byte order mark), sometimes not.
2. Strings are sometimes serialized length-prefixed, sometimes null-terminated.
My suggestion is to double-check the original C++ method WriteString(...) to find out how it behaves with respect to #1 and #2, and then to double-check the C# method GetBytes(...) for the same. If I recall correctly, .NET's BinaryWriter writes a length prefix for each string written, but the UTF-8 encoder does not (and does not output a null character either). The UTF-8 encoder may also (depending on how you use it?) output a BOM.
Also, I'm suspicious of how \0 might be written out when passed through the UTF-8 encoder. You might (for kicks) try outputting the null terminator separately from the string content, as just a 0-valued byte.
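For example, a small helper along these lines keeps the terminator out of the managed string entirely; this is only a minimal sketch, and the helper name is mine, not from the original code:
using System.IO;
using System.Text;

// Hypothetical helper: write the raw UTF-8 bytes (no length prefix, no BOM),
// then the null terminator as an explicit 0-valued byte.
static void WriteNullTerminatedString(BinaryWriter writer, string value)
{
    writer.Write(Encoding.UTF8.GetBytes(value));
    writer.Write((byte)0);
}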

The size of long in C# was different from C++; fixing that resolved the issue.
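For reference, long is 64 bits in C#, while the C++ long used by the original WriteLong was evidently 32 bits, so the cast changed the packet layout. A minimal sketch of the difference, assuming the same MemoryStream/BinaryWriter setup as above:
using System;
using System.IO;

var stream = new MemoryStream();
var writer = new BinaryWriter(stream);
writer.Write((long)(-1)); // 8 bytes: FF FF FF FF FF FF FF FF
writer.Write((int)(-1));  // 4 bytes: FF FF FF FF, matching the original C++ WriteLong(-1)
writer.Flush();
Console.WriteLine(BitConverter.ToString(stream.ToArray()));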

Related

Stuck on second loop through at the reading part of the stream

public static void RemoteDesktopFunction()
{
    Task.Run(async () =>
    {
        while (!ClientSession.noConnection && data != "§Close§")
        {
            byte[] frameBytes = ScreenShotToByteArray();
            byte[] buffer = new byte[900];
            using (MemoryStream byteStream = new MemoryStream())
            {
                await byteStream.WriteAsync(frameBytes, 0, frameBytes.Length);
                byteStream.Seek(0, SeekOrigin.Begin);
                for (int i = 0; i <= frameBytes.Length; i += buffer.Length)
                {
                    await byteStream.ReadAsync(buffer, i, buffer.Length);
                    await ClientSession.SendData(Encoding.UTF8.GetString(buffer).Trim('\0') + "§RemoteDesktop§");
                }
                await ClientSession.SendData("§RemoteDesktopFrameDone§§RemoteDesktop§");
            };
        }
    });
}
I'm trying to add a remote desktop function to my program by passing chunks of bytes that are read from the byte stream. frameBytes.Length is about 20,000 bytes in the debugger, and the chunk size is 900 bytes. I expected it to read through and send chunks of data from the frameBytes array to a network stream. But it got stuck on:
await byteStream.ReadAsync(buffer, i, buffer.Length);
on the second pass through the loop.
What could cause the issue?
There is no obvious reason why this code should hang on ReadAsync. But an obvious problem is that you are not using the return value that tells you how many bytes were actually read, so the last chunk will likely carry stale data from the previous chunk at the end. (Note also that the offset passed to ReadAsync is an offset into buffer, not into the stream, so on the second pass ReadAsync(buffer, 900, 900) asks for a write past the end of the 900-byte buffer.)
Note that there is really no reason to use async variants to read/write 900 bytes from/to a memory stream. Async is mostly meant to hide I/O latency, and writing to memory is not an I/O operation.
If the goal is to chunk a byte array, you can just use the overload of GetString that takes a span:
var chunk = frameBytes.AsSpan().Slice(i, Math.Min(900, frameBytes.Length - i));
at least on any modern C# version; on older versions you can just use Buffer.BlockCopy. There is no need for a MemoryStream at all.
All this assumes your actual idea is sound. I know little about RDP, but it seems odd to convert an array of more or less random data to a string as if it were UTF-8 encoded. Normally, when sending binary data over a text protocol, you would encode it as a Base-64 string, or possibly prefix it with a command that includes the length. I'm also not sure what the purpose of sending it in chunks is; what is the client supposed to do with 900 bytes of screenshot? But again, I know little about RDP.
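Putting the chunking suggestion together, here is a minimal sketch without the MemoryStream and its ReadAsync/WriteAsync calls. ScreenShotToByteArray and ClientSession.SendData are assumed from the question's code, and the UTF-8-to-string step is kept only to mirror the original approach:
using System;
using System.Text;
using System.Threading.Tasks;

// Stand-in for the body of the question's while loop.
static async Task SendScreenFrameAsync()
{
    byte[] frameBytes = ScreenShotToByteArray();
    const int chunkSize = 900;

    for (int i = 0; i < frameBytes.Length; i += chunkSize)
    {
        // Take only the bytes that remain, so the last chunk carries no stale data.
        int count = Math.Min(chunkSize, frameBytes.Length - i);
        string chunk = Encoding.UTF8.GetString(frameBytes, i, count);
        await ClientSession.SendData(chunk + "§RemoteDesktop§");
    }
    await ClientSession.SendData("§RemoteDesktopFrameDone§§RemoteDesktop§");
}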

Any proper way of sending byte data to Unity3D from a C plugin?

Just a question of curiosity here.
When you write plugins for Unity on the iOS platform, the plugins have a limited native-to-managed callback functionality (from the plugin and then to Unity). Basically this documentation:
iOS plugin Unity documentation
states that the function signature you are able to call back to is this:
Only script methods that correspond to the following signature can be called from native code: function MethodName(message:string)
The signature defined in C looks like this:
void UnitySendMessage(const char* obj, const char* method, const char* msg);
So this pretty much means I can only send strings back to Unity.
Now in my plugin I'm using protobuf-net to serialize objects and send them back to Unity to be deserialized. I have gotten this to work, but with a solution I feel is quite ugly and not very elegant at all:
Person* person = [[[[[Person builder] setId:123]
    setName:@"Bob"]
    setEmail:@"bob@example.com"] build];
NSData* data = [person data];
NSString *rawTest = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
UnitySendMessage("GameObject", "ReceiveProductRequestResponse", [rawTest cStringUsingEncoding:NSUTF8StringEncoding]);
Basically I simply encode the bytestream into a string. In Unity I then get the bytes of the string and deserialize from there:
System.Text.UTF8Encoding encoding=new System.Text.UTF8Encoding();
Byte[] bytes = encoding.GetBytes(message);
This does work, but is there really no other way of doing it? Perhaps someone has an idea of how it could be done in some alternative way?
Base-64 (or another similar base) is the correct way to do this; you cannot use an encoding here (such as UTF8) - an encoding is intended to transform:
arbitrary string <===encoding===> structured bytes
i.e. where the bytes have a defined structure; this is not the case with protobuf; what you want is:
arbitrary bytes <===transform===> structured string
and base-64 is the most convenient implementation of that in most cases. Strictly speaking, you can sometimes go a bit higher than 64, but you'd probably have to roll it manually - not pretty. Base-64 is well-understood and well-supported, making it a good choice. I don't know how you do that in C, but in Unity it should be just:
string s = Convert.ToBase64String(bytes);
Often, you can also avoid an extra buffer here, assuming you are serializing in-memory to a MemoryStream:
string s;
using (var ms = new MemoryStream()) {
    // not shown: serialization steps
    s = Convert.ToBase64String(ms.GetBuffer(), 0, (int)ms.Length);
}
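A quick round-trip sketch of the point above: pushing arbitrary bytes through a text encoding loses data, while Base-64 brings the exact bytes back (the byte values here are just an illustration):
using System;
using System.Text;

byte[] original = { 0x08, 0x7B, 0xFF, 0xFE, 0x00, 0x12 }; // arbitrary binary, e.g. a protobuf payload

// UTF-8 round trip: invalid byte sequences are replaced, so the data is corrupted.
byte[] viaUtf8 = Encoding.UTF8.GetBytes(Encoding.UTF8.GetString(original));

// Base-64 round trip: the exact original bytes come back.
byte[] viaBase64 = Convert.FromBase64String(Convert.ToBase64String(original));

Console.WriteLine(BitConverter.ToString(viaUtf8));   // differs from the original
Console.WriteLine(BitConverter.ToString(viaBase64)); // 08-7B-FF-FE-00-12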
Example based on Marc Gravell's answer:
On the iOS side:
-(void)sendData:(NSData*)data
{
    NSString* base64String = [data base64Encoding];
    const char* utf8String = [base64String cStringUsingEncoding:NSUTF8StringEncoding];
    UnitySendMessage("iOSNativeCommunicationManager", "dataReceived", utf8String);
}
and on the Unity side:
public delegate void didReceivedData( byte[] data );
public static event didReceivedData didReceivedDataEvent;
public void dataReceived( string bytesString )
{
    byte[] data = System.Convert.FromBase64String(bytesString);
    if( didReceivedDataEvent != null )
        didReceivedDataEvent(data);
}

Reading and writing data between c# and java sockets

I've been fighting with this all day. I need to send Strings (JSON) between a C# server and a Java client. I have to have the first 4 bytes (always 4 bytes) as the length of the message (the header, so we know how long the rest of the message is) and then the body of the message. The streams stay open for the duration of the app's life. Personally I would have just delimited each message with a '\n' and then used readLine(), but the client NEEDS it this way.
I need the C# side to send and receive these messages, as well as the Java side. I'm not too sure how to encode and decode everything.
Some of the bits I've been playing with:
C# send
byte[] body = Encoding.ASCII.GetBytes(message);
byte[] header = BitConverter.GetBytes((long) body.Length);
foreach (byte t in header)
{
    networkStream.WriteByte(t);
}
foreach (byte t in body)
{
    networkStream.WriteByte(t);
}
I didn't get to C# receive yet. Java send:
byte[] dataToSend = data.getBytes();
byte[] header = ByteBuffer.allocate(4).putInt(dataToSend.length).array();
ByteArrayOutputStream output = new ByteArrayOutputStream();
output.write(header);
output.write(dataToSend);
output.writeTo(outputStream);
Java receive:
byte[] header = new byte[4];
int bytesRead;
do {
    Debug.log("TCPClient- waiting for header...");
    bytesRead = reader.read(header);
    ByteBuffer bb = ByteBuffer.wrap(header);
    int messageLength = bb.getInt();
    Debug.log("TCPClient- header read. message length (" + messageLength + ")");
    byte[] body = new byte[messageLength];
    do {
        bytesRead = reader.read(body);
    }
    while (reader.available() > 0 && bytesRead != -1);
}
while (reader.available() > 0 && bytesRead != -1);
I know the code is not all complete, but can anyone provide any assistance?
Delimiting messages with '\n' is not a good idea, unless you’re sure that your messages will only consist of single-line text. If a '\n' occurs innately within one of your messages, then that message would get split up.
You state that the message length must be exactly 4 bytes; however, the following line generates an 8-byte array:
byte[] header = BitConverter.GetBytes((long) body.Length);
The reason is that, in C#, long is an alias for the Int64 struct, representing a 64-bit signed integer. What you need is an int or a uint, which represents a 32-bit signed or unsigned integer. Thus, just change the above line to:
byte[] header = BitConverter.GetBytes(body.Length);
Another important consideration you need to make is the endianness of your data. Imagine you're trying to create a 4-byte array for a value of, say, 7. On a big-endian platform, this would get encoded as 0,0,0,7; on a little-endian platform, it would get encoded as 7,0,0,0. However, if it gets decoded on a platform having the reverse endianness, the 7 would be interpreted as the most significant byte, giving a value of 117,440,512 (equal to 7 × 256³) rather than 7.
Thus, the endianness must be the same for both your Java and your C# applications. The Java ByteBuffer is big-endian by default; however, the C# BitConverter is architecture-dependent, and may be checked through its IsLittleEndian static property. You can get C# to always follow the big-endian convention by reversing your array when it is little-endian:
byte[] header = BitConverter.GetBytes(body.Length);
if (BitConverter.IsLittleEndian)
Array.Reverse(header);
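For the C# receive side (not shown in the question), a minimal sketch along the same lines, assuming the same networkStream and the big-endian 4-byte header described above:
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

// Read one length-prefixed message: a 4-byte big-endian length, then the ASCII body.
static string ReadMessage(NetworkStream networkStream)
{
    byte[] header = ReadExactly(networkStream, 4);
    if (BitConverter.IsLittleEndian)
        Array.Reverse(header);                    // wire format is big-endian
    int messageLength = BitConverter.ToInt32(header, 0);

    byte[] body = ReadExactly(networkStream, messageLength);
    return Encoding.ASCII.GetString(body);
}

// Stream.Read may return fewer bytes than requested, so loop until the buffer is full.
static byte[] ReadExactly(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new IOException("Connection closed before the message was complete.");
        offset += read;
    }
    return buffer;
}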

zeromq - bytearray (from .net server) to string in java

I'm working on a project that uses both .NET and Java, using ZeroMQ to communicate between them.
I can connect to the .net server, however when I try to convert the byte array to a string strange things happen. In the eclipse debugger I can see the string, and its length. When I click on the string its value changes to being only the first letter, and the length changes to 1. In the eclipse console when I try to copy and paste the output I only get the first letter. I also tried running it in NetBeans and get the same issue.
I thought it might be due to endianness, so I have tried both
BIG_ENDIAN
LITTLE_ENDIAN
Does anyone know how I can get the full string, and not just the first letter?
import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import org.zeromq.ZMQ;

class local_thr
{
    private static final String ENDPOINT = "tcp://127.0.0.1:8000";
    static String[] myargs = {ENDPOINT, "1000", "100"};

    public static void main (String [] args) {
        args = myargs;
        ZMQ.Context ctx = ZMQ.context (1);
        ZMQ.Socket s = ctx.socket (ZMQ.SUB);
        s.subscribe("".getBytes());
        s.connect (ENDPOINT);
        while (true) {
            byte [] data = s.recv (0);
            ByteBuffer buf = ByteBuffer.wrap(data);
            buf.order(ByteOrder.nativeOrder());
            byte[] bytes = new byte[buf.remaining()];
            buf.get(bytes, 0, bytes.length);
            String quote;
            quote = new String(bytes);
            String myQuote;
            myQuote = new String();
            System.out.println (quote);
        }
    }
}
1 char suggests that the data is being encoded as little-endian UTF-16 and decoded as nul-terminated (could be expecting single-byte, could be expecting UTF-8).
Make sure you are familiar with encodings, and ensure that both ends of the pipe are using the same encoding.
The Java String(byte[]) constructor uses the default system charset; I would start by investigating how to read UTF-16 from Java. Or maybe use UTF-8 at both ends. Using a default charset is never robust.
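To see why only one character comes out, here is a small C# sketch of what UTF-16LE bytes look like compared to UTF-8 (the sample string is made up); if the .NET publisher sends UTF-8 and the Java side decodes with new String(bytes, "UTF-8"), the mismatch goes away:
using System;
using System.Text;

string quote = "MSFT 27.15"; // made-up sample

// UTF-16LE (.NET's Encoding.Unicode): every ASCII char is followed by a 0x00 byte,
// so a decoder that stops at the first NUL sees only the first letter.
Console.WriteLine(BitConverter.ToString(Encoding.Unicode.GetBytes(quote))); // 4D-00-53-00-46-00-...

// UTF-8: no embedded zero bytes for ASCII text.
Console.WriteLine(BitConverter.ToString(Encoding.UTF8.GetBytes(quote)));    // 4D-53-46-54-...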

Is there a BinaryReader in C++ to read data written from a BinaryWriter in C#?

I've written several ints, char[]s and such to a data file with BinaryWriter in C#. Reading the file back in (in C#) with BinaryReader, I can recreate all of the pieces of the file perfectly.
However, attempting to read them back in with C++ yields some scary results. I was using fstream to attempt to read back the data, and the data was not reading in correctly. In C++, I set up an fstream with ios::in|ios::binary|ios::ate and used seekg to target my location. I then read the next four bytes, which were written as the integer "16" (and read back correctly in C#). This reads as 1244780 in C++ (not the memory address, I checked). Why would this be? Is there an equivalent to BinaryReader in C++? I noticed it mentioned on MSDN, but that's Visual C++ and the IntelliSense doesn't even look like C++ to me.
Example code for writing the file (C#):
public static void OpenFile(string filename)
{
    fs = new FileStream(filename, FileMode.Create);
    w = new BinaryWriter(fs);
}
public static void WriteHeader()
{
    w.Write('A');
    w.Write('B');
}
public static byte[] RawSerialize(object structure)
{
    Int32 size = Marshal.SizeOf(structure);
    IntPtr buffer = Marshal.AllocHGlobal(size);
    Marshal.StructureToPtr(structure, buffer, true);
    byte[] data = new byte[size];
    Marshal.Copy(buffer, data, 0, size);
    Marshal.FreeHGlobal(buffer);
    return data;
}
public static void WriteToFile(Structures.SomeData data)
{
    byte[] buffer = Serializer.RawSerialize(data);
    w.Write(buffer);
}
I'm not sure how I could show you the data file.
Example of reading the data back (C#):
BinaryReader reader = new BinaryReader(new FileStream("C://chris.dat", FileMode.Open));
char[] a = new char[2];
a = reader.ReadChars(2);
Int32 numberoffiles;
numberoffiles = reader.ReadInt32();
Console.Write("Reading: ");
Console.WriteLine(a);
Console.Write("NumberOfFiles: ");
Console.WriteLine(numberoffiles);
This is what I want to perform in C++. Initial attempt (fails at the first integer):
fstream fin("C://datafile.dat", ios::in|ios::binary|ios::ate);
char *memblock = 0;
int size;
size = 0;
if (fin.is_open())
{
    size = static_cast<int>(fin.tellg());
    memblock = new char[static_cast<int>(size+1)];
    memset(memblock, 0, static_cast<int>(size + 1));
    fin.seekg(0, ios::beg);
    fin.read(memblock, size);
    fin.close();
    if (!strncmp("AB", memblock, 2)) {
        printf("test. This works.");
    }
    fin.seekg(2); // read the stream starting from after the second byte.
    int i;
    fin >> i;
Edit: It seems that no matter what location I use "seekg" to, I receive the exact same value.
You realize that a char is 16 bits in C# rather than the 8 it usually is in C. This is because a char in C# is designed to handle Unicode text rather than raw data. Therefore, writing chars using the BinaryWriter will result in Unicode being written rather than raw bytes.
This may have led you to calculate the offset of the integer incorrectly. I recommend you take a look at the file in a hex editor, and if you cannot work out the issue, post the file and the code here.
EDIT1
Regarding your C++ code, do not use the >> operator to read from a binary stream. Use read() with the address of the int that you want to read to.
int i;
fin.read((char*)&i, sizeof(int));
EDIT2
Reading from a closed stream is also going to result in undefined behavior. You cannot call fin.close() and then still expect to be able to read from it.
This may or may not be related to the problem, but...
When you create the BinaryWriter, it defaults to writing chars in UTF-8. This means that some of them may be longer than one byte, throwing off your seeks.
You can avoid this by using the 2 argument constructor to specify the encoding. An instance of System.Text.ASCIIEncoding would be the same as what C/C++ use by default.
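A minimal sketch of that suggestion, reusing the file name from the question's reading code; each char then occupies exactly one byte, so the C++ offsets line up:
using System.IO;
using System.Text;

FileStream fs = new FileStream("C://chris.dat", FileMode.Create);
BinaryWriter w = new BinaryWriter(fs, Encoding.ASCII); // two-argument constructor
w.Write('A');   // one byte: 0x41
w.Write('B');   // one byte: 0x42
w.Write(16);    // Int32: four little-endian bytes, 10 00 00 00
w.Close();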
There are many things going wrong in your C++ snippet. You shouldn't mix binary reading with formatted reading:
// The file is closed after this line. It is WRONG to read from a closed file.
fin.close();
if (!strncmp("AB", memblock, 2)) {
    printf("test. This works.");
}
fin.seekg(2); // You are moving the "get pointer" of a closed file
int i;
// Even if the file is opened, you should not mix formatted reading
// with binary reading. ">>" is just an operator for reading formatted data.
// In other words, it is for reading "text" and converting it to a
// variable of a specific data type.
fin >> i;
If it's any help, I went through how the BinaryWriter writes data here.
It's been a while but I'll quote it and hope it's accurate:
Int16 is written as 2 bytes and padded.
Int32 is written as Little Endian and zero padded
Floats are more complicated: it takes the float value and dereferences it, getting the memory address's contents which is a hexadecimal
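As a quick check of the Int32 claim above, a small sketch that dumps what BinaryWriter actually emits:
using System;
using System.IO;

using (var ms = new MemoryStream())
using (var w = new BinaryWriter(ms))
{
    w.Write((short)1); // Int16: 01-00
    w.Write(16);       // Int32: 10-00-00-00 (little-endian)
    w.Flush();
    Console.WriteLine(BitConverter.ToString(ms.ToArray())); // 01-00-10-00-00-00
}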
