I'm working on a protocol that will transfer blocks of XML data over a TCP socket. Say I need to read all the bytes from an XML file and build a memory buffer. Then, before sending the actual data bytes, I need to send a header to the other peer. Say my protocol uses the header type below.
MessageID=100,Size=232,CRC=190
string strHeader = "100,232,190"
Now I would like to know how I can make this header length fixed (a fixed header length is required for the other peer to identify it as a header) for any amount of XML data. Currently, say I have an XML file sized 283637 bytes, so the message header will look like:
string strHeader = "100,283637,190"
How can I make it generic for any size of data? The code is being written in both C++ and C#.
There are a number of ways to do it.
Fixed Length
You can pad the numbers with leading zeroes so you know exactly what length of text you need to work with: 000100,000232,000190
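For instance, a minimal C# sketch of that idea (the 6-digit field width is just an assumption; pick widths large enough for your maximum values):

// Build a fixed-width header: each field zero-padded to 6 digits, 20 characters total.
int messageId = 100;
int size = 283637;
int crc = 190;
string strHeader = $"{messageId:D6},{size:D6},{crc:D6}";  // "000100,283637,000190"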
Use Bytes instead of strings
If you are using integers, you can read the bytes as integers instead of manipulating the string. Look into the BinaryReader class. If you need to do this on the C++ side, the concept is still the same; there are many ways to convert 4 bytes into an int.
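A rough C# sketch of that approach (the networkStream variable and the field order are assumptions based on your header):

// Writing side: three 4-byte integers, so the header is always 12 bytes.
using (var writer = new System.IO.BinaryWriter(networkStream, System.Text.Encoding.UTF8, leaveOpen: true))
{
    writer.Write(100);      // MessageID
    writer.Write(283637);   // Size
    writer.Write(190);      // CRC
}

// Reading side: read the same three integers back in the same order.
using (var reader = new System.IO.BinaryReader(networkStream, System.Text.Encoding.UTF8, leaveOpen: true))
{
    int messageId = reader.ReadInt32();
    int size = reader.ReadInt32();
    int crc = reader.ReadInt32();
}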
Specify the length at the beginning
Usually when working with dynamic-length strings, there is an indicator of how many bytes need to be read in order to get the entire string. You could specify the first 4 bytes of your message as the length of your string and then read up to that point.
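For example, a length-prefixed send could look like this in C# (stream is assumed to be an open NetworkStream):

// Send a 4-byte length prefix followed by the payload itself.
byte[] payload = System.Text.Encoding.UTF8.GetBytes(strHeader);
byte[] lengthPrefix = BitConverter.GetBytes(payload.Length);
stream.Write(lengthPrefix, 0, lengthPrefix.Length);
stream.Write(payload, 0, payload.Length);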
The best approach for you is to implement this as a struct like
typedef struct _msg_hdr {
    int messageID;
    int size;
    int crc;
} msg_hdr;
This will always be 12 bytes long (three 4-byte ints; make sure the struct is packed so no padding is added). Now when sending your message, first send the header to the receiver. The receiver should receive it into the same structure. This is the best and easiest way.
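On the C# side, a rough equivalent (just a sketch; stream is assumed to be the open NetworkStream) is to pack the same three 4-byte fields into a 12-byte buffer:

// Pack the 12-byte header: MessageID, Size, CRC as 4-byte ints.
byte[] header = new byte[12];
BitConverter.GetBytes(100).CopyTo(header, 0);     // MessageID
BitConverter.GetBytes(283637).CopyTo(header, 4);  // Size
BitConverter.GetBytes(190).CopyTo(header, 8);     // CRC
stream.Write(header, 0, header.Length);           // send the header before the XML payload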
I'm writing something that converts data to byte[], transfers it through the internet, then converts it back to what it was, for a Unity game project.
I use BitConverter to convert int, float, etc., as the following example shows:
float aFloat = 312321f;
var bytes = BitConverter.GetBytes(aFloat);
if (BitConverter.IsLittleEndian) Array.Reverse(bytes);
// sending through the internet
byte[] bytes = GetByteArrayFromTheInternet();
if (BitConverter.IsLittleEndian) Array.Reverse(bytes);
float aFloat = BitConverter.ToSingle(bytes, 0);
I do the endianness check before and after sending the data to make sure they're the same. Do I need to do this for string?
string aString = "testing";
var bytes = System.Text.Encoding.Unicode.GetBytes(aString);
// if (BitConverter.IsLittleEndian) Array.Reverse(bytes); // Do I need this line?
// sending through the internet
byte[] bytes = GetByteArrayFromTheInternet();
// if (BitConverter.IsLittleEndian) Array.Reverse(bytes); // Do I need this too?
string aString = System.Text.Encoding.Unicode.GetString(bytes);
Thanks in advance!
I do the endianness check before and after sending the data to make sure they're the same. Do I need to do this for string?
That depends on who you're talking to on the network. What endianness are they using?
In your first example, you are assuming that the network protocol always sends float types (32-bit floating point) as big-endian. Which is fine; traditionally, network byte order has always been big-endian, so it's a good choice for a network protocol.
But there's no requirement that a network protocol comply with that, nor that it be internally self-consistent, and you haven't provided any information about what protocol you're implementing.
Note: by "network protocol", I'm referring to the application-level protocol. This would be something like HTTP, SMTP, FTP, POP, etc. I.e. whatever your application chooses for the format of bytes on the network.
So, you'll have to consult the specification of the protocol you're using to find out what endianness the Unicode-encoded (UTF-16) data uses. I would guess that it's big-endian, since your float values are too, but I can't say that for sure.
Note that if the network protocol does encode text as big-endian UTF-16, then you don't need to swap the bytes for each character yourself. Just use the BigEndianUnicode encoding object to encode and decode the text. It will handle the endianness for you.
Note also that it's not really optional to use the right encoder. Checking the BitConverter.IsLittleEndian field only tells you the current CPU architecture. If the text in the network protocol is encoded as big-endian, then even if you are running on a big-endian CPU you still need to use the BigEndianUnicode encoding. Just as that encoding always reliably decodes big-endian text, the Unicode encoding always decodes text as if it's little-endian, even when running on a big-endian CPU.
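For example, assuming the protocol does define big-endian UTF-16 text, a minimal sketch would be:

string aString = "testing";

// Encode as big-endian UTF-16; no manual byte swapping is needed.
byte[] bytes = System.Text.Encoding.BigEndianUnicode.GetBytes(aString);

// ... send bytes and receive them on the other side ...

// Decode with the same encoding, regardless of the local CPU's endianness.
string decoded = System.Text.Encoding.BigEndianUnicode.GetString(bytes);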
I'm reading up on the ProtectedMemory class in C# (which uses the Data Protection API in Windows (DPAPI)) and I see that in order to use the Protect() Method of the class, the data to be encrypted must be stored in a byte array whose size/length is a multiple of 16.
I know how to convert many different data types to byte array form and back again, but how can I guarantee that the size of a byte array is a multiple of 16? Do I literally need to create an array whose size is a multiple of 16 and keep track of the original data's length using another variable, or am I missing something? With traditional block ciphers all of these details are handled for you automatically by padding settings. Likewise, when I attempt to convert data back to its original form from a byte array, how do I ensure that any additional bytes are ignored, assuming of course that the original data's length wasn't a multiple of 16?
In the code sample provided in the .NET Framework documentation, the byte array utilised just so happens to be 16 bytes long, so I'm not sure what best practice is in relation to this; hence the question.
Yes, just to iterate over the possibilities given in the comments (and give an answer to this nice question), you can use:
a padding method that is also used for block cipher modes; see all the options on the Wikipedia page on the subject.
prefix a length in some form or other. A fixed size of 32 bits / 4 bytes is probably easiest. Do write down the type of encoding for the size (unsigned, little endian is probably best for C#).
Both of these already operate on bytes, so you may need to define a character encoding such as UTF-8 if you use a string.
You could also use a specific encoding of the string, e.g. one defined by ASN.1 / DER and then perform zero padding. That way you can even indicate the type of the data that has been encoded in a platform independent way. You may want to read up on masochism before taking this route.
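As a rough sketch of the length-prefix option (names are illustrative, and this assumes a little-endian platform for BitConverter):

// Prefix the original length, then zero-pad the buffer up to a multiple of 16.
byte[] original = System.Text.Encoding.UTF8.GetBytes("secret data");
int paddedLength = ((original.Length + 4 + 15) / 16) * 16;

byte[] buffer = new byte[paddedLength];
BitConverter.GetBytes(original.Length).CopyTo(buffer, 0);  // 4-byte length prefix
original.CopyTo(buffer, 4);                                // payload; the rest stays zero

System.Security.Cryptography.ProtectedMemory.Protect(buffer,
    System.Security.Cryptography.MemoryProtectionScope.SameProcess);
System.Security.Cryptography.ProtectedMemory.Unprotect(buffer,
    System.Security.Cryptography.MemoryProtectionScope.SameProcess);

// Recover the original bytes using the stored length; trailing padding is ignored.
int originalLength = BitConverter.ToInt32(buffer, 0);
byte[] recovered = new byte[originalLength];
Array.Copy(buffer, 4, recovered, 0, originalLength);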
Good morning,
I'm new to network programming but have been doing research and got the basics of setting up a server/client application. I would like to send binary data via TCP from the server to the client to parse and print out integers based on certain field lengths.
I'm basically creating a dummy server to send network data and would like for my client to parse it.
My idea is to create a byte array: byte[] data = {1,0,0,0,1,0,0,1} to represent 8 bytes being set. For example, the client would read the first 2 bytes and print a 2, followed by the next 6 bytes and print a 9.
This is a simple example. The byte array I would like to send would be 864 bytes. I would parse the first 96, then the next 48, then 48, etc.
Would this be a good way of doing this? If not, how should I send 1s and 0s? I found many examples sending strings, but I would like to send binary data.
Thanks.
You seem to be confusing bits and bytes.
A byte is composed of 8 bits, which can represent integer values from 0 to 255.
So, instead of sending {1,0,0,0,1,0,0,1}, splitting the byte array and parsing the bytes as bits to get 2 and 9, you could simply create your array as:
byte[] data={2,9};
To send other primitive data types (int, long, float, double, ...), you can convert them to a byte array:
int x = 96;
byte[] data = BitConverter.GetBytes(x);
The byte array can then be written to the stream as:
stream.Write(data, 0, data.Length);
On the client side, parse the byte arrays as:
int x = BitConverter.ToInt32(data, startIndex);
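Putting that together, a minimal end-to-end sketch (the loopback address and port 5000 are assumptions, and error handling is omitted):

// Server: accept one client and send a single int.
var listener = new System.Net.Sockets.TcpListener(System.Net.IPAddress.Loopback, 5000);
listener.Start();
using (var client = listener.AcceptTcpClient())
using (var stream = client.GetStream())
{
    byte[] data = BitConverter.GetBytes(96);
    stream.Write(data, 0, data.Length);
}
listener.Stop();

// Client: connect, read 4 bytes, and convert them back to an int.
using (var client = new System.Net.Sockets.TcpClient("127.0.0.1", 5000))
using (var stream = client.GetStream())
{
    byte[] buffer = new byte[4];
    stream.Read(buffer, 0, buffer.Length);
    int value = BitConverter.ToInt32(buffer, 0);  // 96
}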
MSDN has great references on TCP clients and listeners.
https://msdn.microsoft.com/en-us/library/system.net.sockets.tcplistener(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/system.net.sockets.tcpclient(v=vs.110).aspx
I have a C# library which basically starts listening as a TCP/IP server and accepts a buffer of a certain size.
I need to send this packet from PHP over the socket as a byte array or equivalent.
The packet is constructed so that, for example, byte[1] (a flag) is a number from 0 to 255, and byte[6] to byte[11] contain a float number in string format, for example:
005.70, which takes 6 bytes, one per character.
I managed to send the flag, but when I try to send the float number it does not convert on the other side (C#).
So my question is: how can I send a byte array to C# using PHP?
On the C# side the conversion is handled as follows:
float.Parse(System.Text.Encoding.Default.GetString(Data, 6, 6));
Just after I posted the question I worked out my own answer. I am not 100% sure if this is the right way, but it managed to convert correctly.
Here is the answer:
I created an array of characters and escaped the flag (4) so that it becomes the actual byte value 4, but I didn't escape the money value:
$string = array (0=>"\0", 1=>"\4", 2=>"\0", 3=>"\0", 4=>"\0", 5=>"\0", 6=>"5", 7=>".", 8=>"7", 9=>"\0", 10=>"\0");
Then I imploded it all together with an empty string as the glue:
$arrByte = implode("", $string);
and sent over the opened socket:
$success = #fwrite($this->socket, $arrByte);
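For reference, the C# side could then read that buffer with a sketch like the following (this assumes the buffer really is at least 12 bytes as described in the question, and trims the padding NUL characters before parsing):

byte flag = Data[1];  // 4
string text = System.Text.Encoding.Default.GetString(Data, 6, 6).TrimEnd('\0');
float value = float.Parse(text, System.Globalization.CultureInfo.InvariantCulture);  // 5.7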
I have a data structure I am passing from server to client using a data contract.
The data structure is:
class datatransfer
{
    double m_value1;
    double m_value2;
    double m_value3;
}
On the client I want to write it into a file.
The idea is to convert the values of the data transfer into a string using a StringBuilder and then transfer the string to the client,
or
send the data structure and write the file using a StreamWriter.
Which is the best approach: converting to a string, or sending the data structure and writing it to the file?
Reason for the question: to avoid the generation of the string.
A double is 8 bytes. If I convert it to a string, what will be the allocated size?
It depends entirely on how you format the String representation of the Double. There is no pre-defined sizing guideline.
For example:
var myDouble = 5.183498029834092834D;
var shortString = myDouble.ToString("#.00"); // 5.18, uses 8 bytes
var longerString = myDouble.ToString("#.0000000"); // 5.1834980, uses 18 bytes
Note that the sizes are the result of a Char being 2 bytes (.NET strings are UTF-16).
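A quick sketch of how to check those sizes (the format string is just an example):

var myDouble = 5.183498029834092834D;
string shortString = myDouble.ToString("#.00");

// A double is always 8 bytes; the string's size depends on its length and encoding.
int binarySize = sizeof(double);                                      // 8
int utf16Size = shortString.Length * sizeof(char);                    // 8 for "5.18"
int utf8Size = System.Text.Encoding.UTF8.GetByteCount(shortString);   // 4 for "5.18"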
What you're really trying to do is called serialization. There are a number of ways to do this. The simplest of which may be to just decorate your class with the [Serializable] attribute:
[Serializable]
public class datatransfer {
    double m_value1;
    double m_value2;
    double m_value3;
}
You'll need to make your member variables public, or provide publicly accessible properties for setting the member variables. Otherwise they will not be serialized using this approach.
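One possible sketch of that (BinaryFormatter is the classic pairing with [Serializable], though newer frameworks favour other serializers; the file name and values here are made up):

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class datatransfer
{
    public double m_value1;
    public double m_value2;
    public double m_value3;
}

// Server side: serialize the object into a byte[] that can be sent to the client.
var formatter = new BinaryFormatter();
byte[] bytes;
using (var ms = new MemoryStream())
{
    formatter.Serialize(ms, new datatransfer { m_value1 = 1.5, m_value2 = 2.5, m_value3 = 3.5 });
    bytes = ms.ToArray();
}

// Client side: deserialize and write the values to a file.
using (var ms = new MemoryStream(bytes))
{
    var received = (datatransfer)formatter.Deserialize(ms);
    File.WriteAllText("values.txt", $"{received.m_value1},{received.m_value2},{received.m_value3}");
}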
The size of a double when converted to a string depends on the number itself and on the encoding of the string.
Just as an example, assuming ANSI encoding, the number 1 will need 1 byte and the number 1.123 will need 5 bytes. Moreover, if you transmit the values as text you need to account for extra bytes used as delimiters (for example, using a space between N values you'll need N - 1 extra bytes). You should transfer data in binary if possible, but it may depend on the type of connection and the protocol you have to use.
As a general rule, binary data is smaller (and therefore faster to transfer), but it's harder to debug and you may have problems with firewalls. Text data is larger and more verbose, you have to validate it on the client side (your structure, XML for example, may be corrupted), and it's slower to transfer. The big advantages are that it's easier to debug and that, whatever connection/protocol you have, you can usually transfer text (but don't forget you can transfer binary data encoded as text).
So this is not a definitive answer: what kind of data transfer method/framework/protocol are you using? A WCF service? .NET Remoting? A custom TCP/IP connection? If the data structure is not too big, you may find that binary serialization is a very good solution.
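To make the size difference concrete, here is a small sketch using the three-double structure from the question (the values are made up):

// Binary: three doubles always take 3 * 8 = 24 bytes.
var ms = new System.IO.MemoryStream();
using (var writer = new System.IO.BinaryWriter(ms))
{
    writer.Write(1.234567890123);
    writer.Write(2.345678901234);
    writer.Write(3.456789012345);
}
int binarySize = ms.ToArray().Length;  // 24

// Text: the size depends on the digits, the delimiters and the encoding.
string text = string.Join(",", 1.234567890123, 2.345678901234, 3.456789012345);
int textSize = System.Text.Encoding.UTF8.GetByteCount(text);  // 44 here with '.' as decimal separator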