I'm stuck on the concept of TCP as a stream-oriented protocol.
When using Sockets, or anything built on that protocol, there is no way to know how much data will arrive in a single receive; for example, if we send 1024 bytes, they might be delivered across three Receive calls, give or take.
So we basically need to keep calling Receive until the buffer reaches the size that was sent.
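That loop might look something like this (a minimal sketch; ReceiveExact is a name I've made up, not a framework method, and it requires System.IO and System.Net.Sockets):

// Keep calling Receive until exactly `count` bytes have arrived.
static byte[] ReceiveExact(Socket socket, int count)
{
    byte[] buffer = new byte[count];
    int received = 0;
    while (received < count)
    {
        int n = socket.Receive(buffer, received, count - received, SocketFlags.None);
        if (n == 0)
            throw new EndOfStreamException("Connection closed before all bytes arrived.");
        received += n;
    }
    return buffer;
}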
I'm still struggling with networking, and I'm trying to get comfortable with client/server applications. I found that the easiest way to do so is to create packets and send them over the network.
So my question is: suppose we serialize a class as a packet, say a Message type, with a string that holds the message text...
[Serializable]
class Message
{
    string Msg;

    public Message(string msg)
    {
        this.Msg = msg;
    }
}
...if we send this packet, is it undetermined whether it will arrive in a single Receive call? And if not, what's the easiest way to ensure that it does?
Related
In this question, C# Receiving Packet in System.Sockets, a guy asked:
"In my Client Server Application i wondered how to make a packet and send it to the server via the Client
Then on the server i recognize which packet is this and send the proper replay"
and showed the example of his way of implementing the 'packet recognizer'. He got an answer that his way of 'structuring the message' is bad, but no explanation and code example followed by the answer.
So please, can anybody show an example of good code that does something like this, the proper way?
[client]
Send(Encoding.ASCII.GetBytes("1001:UN=user123&PW=123456")) // 1001 is the ID

[server]
private void OnReceivePacket(byte[] arg1, Wrapper Client)
{
    try
    {
        int ID;
        string V = Encoding.ASCII.GetString(arg1).Split(':')[0];
        int.TryParse(V, out ID);

        switch (ID)
        {
            case 1001: // login packet
                AppendToRichEditControl("LOGIN PACKET RECEIVED");
                break;
            case 1002:
                // other IDs
                break;
            default:
                break;
        }
    }
    catch { }
}
TCP ensures that data arrives in the same order it was sent - but it doesn't have a concept of messages. For instance, say you send the following two fragments of data:
1001:UN=user123&PW=123456
999:UN=user456&PW=1234
On the receiving end, you will read 1001:UN=user123&PW=123456999:UN=user456&PW=1234 and this might take one, two or more reads. This may even arrive in two packets as:
1001:UN=user123&PW=12
3456999:UN=user456&PW=1234
This makes it very hard to parse the message correctly. The other post mentions sending the length of a packet before the actual data, and that indeed solves the problem: you can determine exactly where one message ends and the next begins.
As an example, client and server could agree that each message starts with 4 bytes containing the length of the message. The receiver could then simply:
load 4 bytes exactly
convert to an integer, now you know the length of the remainder of the message
read exactly that many bytes and parse the message
C# conveniently has the BitConverter class that allows you to convert the integer to a byte[] and vice versa.
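A minimal sketch of both sides, assuming the 4-byte length prefix described above (ReadExact is a hypothetical helper; requires System, System.IO, System.Net.Sockets and System.Text):

static string ReadMessage(NetworkStream stream)
{
    byte[] header = ReadExact(stream, 4);          // 1. load exactly 4 bytes
    int length = BitConverter.ToInt32(header, 0);  // 2. convert to an integer
    byte[] payload = ReadExact(stream, length);    // 3. read exactly that many bytes
    return Encoding.UTF8.GetString(payload);
}

static byte[] ReadExact(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new EndOfStreamException("Connection closed mid-message.");
        offset += read;
    }
    return buffer;
}

// Sending is the mirror image: length first, then the payload.
static void WriteMessage(NetworkStream stream, string message)
{
    byte[] payload = Encoding.UTF8.GetBytes(message);
    stream.Write(BitConverter.GetBytes(payload.Length), 0, 4);
    stream.Write(payload, 0, payload.Length);
}

Note that BitConverter uses the machine's native byte order, so sender and receiver must agree on endianness.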
My Context
I have a TCP networking program that sends large objects that have been serialized and encoded into base64 over a connection. I wrote a client library and a server library, and they both use NetworkStream's Begin/EndRead and Begin/EndWrite. Here's the (very much simplified version of the) code I'm using:
For the server:
var Server = new TcpServer(/* network stuffs */);
Server.Connect();
Server.OnClientConnect += new ClientConnectEventHandler(Server_OnClientConnect);
void Server_OnClientConnect()
{
LargeObject obj = CalculateLotsOfBoringStuff();
Server.Write(obj.SerializeAndEncodeBase64());
}
Then the client:
var Client = new TcpClient(/* more network stuffs */);
Client.Connect();
Client.OnMessageFromServer += new MessageEventHandler(Client_OnMessageFromServer);
void Client_OnMessageFromServer(MessageEventArgs mea)
{
DoSomethingWithLargeObject(mea.Data.DecodeBase64AndDeserialize());
}
The client library has a callback method for NetworkStream.BeginRead which triggers the event OnMessageFromServer that passes the data as a string through MessageEventArgs.
My Problem
When receiving large amounts of data through BeginRead/EndRead, however, it appears to be fragmented over multiple messages. E.g., pretend this is a long message:
"This is a really long message except not because it's for explanatory purposes."
If that really were a long message, Client_OnMessageFromServer might be called... say three times with fragmented parts of the "long message":
"This is a really long messa"
"ge except not because it's for explanatory purpos"
"es."
Soooooooo.... takes deep breath
What would be the best way to have everything sent through one Begin/EndWrite to be received in one call to Client_OnMessageFromServer?
You can't. With TCP, how things arrive is not necessarily how they were sent. It's the job of your code to know what constitutes a complete message, and if necessary to buffer incoming data until you have a complete message (taking care not to discard the start of the next message in the process).
In text protocols, this usually means "spot the newline / nul-char". For binary, it usually means "read the length header in the preamble of the message".
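Since the question's client uses BeginRead callbacks, the buffering described here might look roughly like this (a sketch, assuming a 4-byte length header as in the earlier answer; the class and names are mine, not part of any library):

// Feed this whatever each EndRead delivers; it raises onMessage once per
// complete length-prefixed message, keeping leftover bytes for the next read.
// Requires System and System.Collections.Generic.
class MessageAssembler
{
    private readonly List<byte> pending = new List<byte>();
    private readonly Action<byte[]> onMessage;

    public MessageAssembler(Action<byte[]> onMessage) { this.onMessage = onMessage; }

    public void Feed(byte[] data, int count)
    {
        for (int i = 0; i < count; i++)
            pending.Add(data[i]);

        while (pending.Count >= 4)
        {
            byte[] header = { pending[0], pending[1], pending[2], pending[3] };
            int length = BitConverter.ToInt32(header, 0);
            if (pending.Count < 4 + length)
                return; // body not complete yet; wait for the next read

            byte[] message = pending.GetRange(4, length).ToArray();
            pending.RemoveRange(0, 4 + length); // keeps the start of the next message
            onMessage(message);
        }
    }
}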
TCP is a stream protocol, and has no fixed message boundaries. This means you can receive part of a message or the end of one and the beginning of another.
There are two ways to solve this:
Alter your protocol to add end-of-message markers. This way you continuously receive until you find the special marker (a sketch of this approach follows below). It can, however, leave you with a buffer containing the end of one message and the beginning of another, which is why I recommend the next approach.
Alter your protocol to first send the length of the message. Then you know exactly how long the message is, and you can count down while receiving, so you won't read into the beginning of the next message.
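A minimal sketch of the first approach, assuming '\n' as the end-of-message marker, a NetworkStream named stream, and ASCII text (a multi-byte encoding would need a Decoder to handle characters split across reads); HandleMessage is a hypothetical callback:

// Accumulate reads in a StringBuilder and peel off complete messages;
// whatever follows the last '\n' stays buffered as the start of the next one.
var pending = new StringBuilder();
var buffer = new byte[1024];
int read;
while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    pending.Append(Encoding.ASCII.GetString(buffer, 0, read));
    int newline;
    while ((newline = pending.ToString().IndexOf('\n')) >= 0)
    {
        string message = pending.ToString(0, newline);
        pending.Remove(0, newline + 1);
        HandleMessage(message);
    }
}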
I am working on a packet system (UDP or TCP, either way is fine), but I have come to the conclusion that my current (TCP) system is really awful.
Basically, I'm sending a packet that looks like this:
string packet = "LOGIN_AUTH;TRUE";
IPAddress target = IPAddress.Parse(endPoint);
// Send packet
When I receive the packet in my client, I use this to determine the packet type:
string[] splitPacket = packet.Split(';');
if (splitPacket[0] == "LOGIN_AUTH" && splitPacket[1] == "TRUE")
authLogin(packet); //Does stuff like loading the next screen
But I'm sure there must be a better way to do this, because of this thread:
http://social.msdn.microsoft.com/forums/en-US/netfxnetcom/thread/79660898-aeee-4c7b-8cba-6da55767daa1/ (post 2)
I'm just wondering if someone could give me a push in the right direction on what I should do to categorize the packets in a way that's easier on the eye.
Thank you in advance.
Edit: I did some research, but I can't find a solution to my problem in WCF. So once again: what is the best way to create a packet structure, and how do I utilize it?
I don't want to recreate a protocol, just something like this:
public string sepChar = ";";

public struct LoginPacket
{
    public string type;
    public string username;
    public string password;
}

public void reqLogin()
{
    lock (_locker) // need to make it thread-safe in this case
    {
        LoginPacket packet;
        packet.type = "login";
        packet.username = "username";
        packet.password = "password";
        sendPacket(packet);
    }
}

public void sendPacket(LoginPacket packet)
{
    // build the wire string from the struct fields (joined with sepChar) and send it
}
I hope that's detailed enough.
I would recommend WCF; if not now, decide to learn it in the future. It has a huge learning curve because it covers not only what you're trying to do, but so many other things.
If you just want a quick-and-dirty solution, then you could use binary serialization. To take this approach, you'll need to define your message types in a dll shared between client and server. Then, remember to use a form of message framing so that the messages don't get munged up in transit.
Things get more complex when you consider versioning. Eventually you'll want to use WCF.
I used my own communication protocol over TCP two years ago; here is the base packet class:
public enum PacketType { Login, Hello, FooBar }

[Serializable] // needed if the packets are serialized with BinaryFormatter, as below
public class Packet
{
    public long Size;
    public PacketType PacketType;
}

[Serializable]
public class LoginPacket : Packet
{
    public string Login;
    public string Password;
}
I don't agree with the Size property, though, because you only know the packet size after it has been serialized with the BinaryFormatter. I used another method: first I serialized the packet object, and then, before putting that byte[] into the network stream, I wrote BitConverter.GetBytes(SerializedPacketMemoryStream.Length) in front of each packet's bytes.
The second thing to consider is that you need to create a temporary buffer for the received bytes and wait until you have received a full packet; only then deserialize it.
I can send you my client/server code; it's not very clean since I wrote it when I was just starting to learn C#, but it's 100% stable, tested for 3 years in a production environment.
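A rough sketch of the write path described above (names are illustrative, and BinaryFormatter is used only because it's what this answer mentions; requires System.IO, System.Net.Sockets and System.Runtime.Serialization.Formatters.Binary):

// Serialize first, then prefix the byte count before the packet bytes.
static void SendPacket(NetworkStream stream, Packet packet)
{
    var formatter = new BinaryFormatter();
    using (var ms = new MemoryStream())
    {
        formatter.Serialize(ms, packet);
        byte[] body = ms.ToArray();
        // ms.Length is a long, so this writes an 8-byte length prefix
        stream.Write(BitConverter.GetBytes(ms.Length), 0, 8);
        stream.Write(body, 0, body.Length);
    }
}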
protobuf might be what you want.
Here's the .NET port: protobuf-net
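A minimal sketch of what that might look like with protobuf-net (the message type here is illustrative, and stream is an assumed NetworkStream; SerializeWithLengthPrefix also takes care of the framing problem discussed above):

[ProtoContract]
public class LoginMessage
{
    [ProtoMember(1)] public string Username { get; set; }
    [ProtoMember(2)] public string Password { get; set; }
}

// Sending: writes a length prefix followed by the encoded message.
Serializer.SerializeWithLengthPrefix(stream,
    new LoginMessage { Username = "user123", Password = "123456" },
    PrefixStyle.Base128);

// Receiving: reads exactly one framed message off the stream.
var msg = Serializer.DeserializeWithLengthPrefix<LoginMessage>(stream, PrefixStyle.Base128);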
My application in C# needs to communicate with a third-party TCP server to send data and receive response messages. The command syntax uses USHORT, ULONG, and BYTE data types.
A sample command that my app needs to send is:
USHORT 0xFFFF
USHORT 0x00D0
BYTE 0xDD
Then in the app I send the data as:
TcpClient tcpClient = new TcpClient();
tcpClient.Connect("XX.XX.XX.XX", portnumber);
NetworkStream ns = tcpClient.GetStream();
StreamWriter sw = new StreamWriter(ns);
sw.Write(0xFFFF);
sw.Write(0x00DD);
sw.Write(0x00);
// or send them as bytes:
sw.Write(0xFF);
sw.Write(0xFF);
sw.Write(0x00);
sw.Write(0xD0);
sw.Write(0x00);
sw.Write(0x00);
and I read the incoming messages from the server as:
while (true)
{
    byte[] buff = new byte[tcpClient.ReceiveBufferSize];
    ns.Read(buff, 0, tcpClient.ReceiveBufferSize);
    string dv = BitConverter.ToString(buff);
}
// the returned data looks like FF-A2-00-23-00-02-00-00-00-00-00-00-D9-2E-20-2E-00-A0-04-00-AE-08
Yes, I know this byte syntax, but the data coming back is not the response I expect for the command I sent; the returned values are not what I'm looking for.
Is there anything wrong with how my code sends data to the server?
Any recommendations on reading and writing data are welcome.
Nobody can tell you what's wrong with the response when they don't know the protocol employed. The server's sending that because it feels like it... it might be something wrong with your request, or it might be a message indicating that it's offline for service. You can only check its specification on how to interpret the result it did send, or ask the people who maintain it.
Might also be a good idea to tag this question with the language you're using, so people can make sense of the function calls and whether you're invoking them properly.
I'd also recommend using a packet sniffer (or on Linux simply strace) to show the packets being read and written... you will probably see the mistakes there. Then, use another program to interact with the server that does work, and compare bytes.
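One concrete thing the byte comparison will show: StreamWriter.Write(0xFFFF) writes the decimal text "65535", not two raw bytes. If the protocol expects raw USHORT/BYTE values, something like BinaryWriter is closer to the mark (a sketch only; BinaryWriter writes little-endian, so if the server expects network byte order the bytes must be swapped, and the leaveOpen overload needs .NET 4.5 or later):

// Field values taken from the sample command above.
using (var bw = new BinaryWriter(ns, Encoding.ASCII, leaveOpen: true))
{
    bw.Write((ushort)0xFFFF); // USHORT
    bw.Write((ushort)0x00D0); // USHORT
    bw.Write((byte)0xDD);     // BYTE
}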
I'm sending a large amount of data in one go between a client and server written in C#. It works fine when I run the client and server on my local machine, but when I put the server on a remote computer on the internet, it seems to drop data.
I send 20000 strings using the socket.Send() method and receive them using a loop which does socket.Receive(). Each string is delimited by unique characters, which I use to count the number received (this is the protocol, if you like). The protocol is proven, in that even with fragmented messages each string is correctly counted. On my local machine I get all 20000; over the internet I get anything between 17000 and 20000. It seems to be worse the slower the remote computer's connection is. To add to the confusion, turning on Wireshark seems to reduce the number of dropped messages.
First of all, what is causing this? Is it a TCP/IP issue or something wrong with my code?
Secondly, how can I get round this? Receiving all of the 20000 strings is vital.
Socket receiving code:
private static readonly Encoding encoding = new ASCIIEncoding();
// ...
while (socket.Connected)
{
    byte[] recvBuffer = new byte[1024];
    int bytesRead = 0;
    try
    {
        bytesRead = socket.Receive(recvBuffer);
    }
    catch (SocketException e)
    {
        if (!socket.Connected)
        {
            return;
        }
    }
    string input = encoding.GetString(recvBuffer, 0, bytesRead);
    CountStringsIn(input);
}
Socket sending code:
private static readonly Encoding encoding = new ASCIIEncoding();
//...
socket.Send(encoding.GetBytes(message));
If packets are being dropped, you'll see a delay in transmission, since the dropped packets have to be retransmitted. This could be very significant, although there's a TCP option called selective acknowledgement which, if supported by both sides, triggers a resend of only the packets that were dropped rather than every packet since the dropped one. There's no way to control that in your code. By default, you can assume that with TCP every packet is delivered in order, and if for some reason it can't deliver every packet in order, the connection will drop, either by a timeout or by one end of the connection sending a RST packet.
What you're seeing is most likely the result of Nagle's algorithm. Instead of sending each bit of data as you post it, it sends one byte and then waits for an ack from the other side. While it's waiting, it aggregates all the other data that you want to send, combines it into one big packet, and then sends it. Since the max size for TCP is 65k, it can combine quite a bit of data into one packet, although it's extremely unlikely that this will occur, particularly since winsock's default buffer size is about 10k or so (I forget the exact amount). Additionally, if the receiver's maximum window size is less than 65k, it will only send as much as the last advertised window size of the receiver. The window size also affects Nagle's algorithm in terms of how much data it can aggregate prior to sending, because it can't send more than the window size.
The reason you see this is that on the internet, unlike on your local network, that first ack takes more time to return, so Nagle's algorithm aggregates more of your data into a single packet. Locally, the return is effectively instantaneous, so it's able to send your data as quickly as you can post it to the socket. You can disable Nagle's algorithm on the client side using SetSockOpt (winsock) or Socket.SetSocketOption (.NET), but I highly recommend that you DO NOT disable Nagle on the socket unless you are 100% sure you know what you're doing. It's there for a very good reason.
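For reference, the .NET version of that call looks like this (again, only do this if you're sure it's what you need, per the warning above):

// Disables Nagle's algorithm on a connected Socket.
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);
// or, equivalently:
socket.NoDelay = true;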
Well, there's one thing wrong with your code to start with, if you're counting the number of Receive calls that complete: you appear to be assuming that you'll see as many Receive calls finish as you made Send calls.
TCP is a stream-based protocol - you shouldn't be worrying about individual packets or reads; you should be concerned with reading the data, expecting that sometimes you won't get a whole message in one read and sometimes you may get more than one message in a single read. (One read doesn't necessarily correspond to one packet, either.)
You should either prefix each message with its length before sending, or have a delimiter between messages.
It's definitely not TCP's fault. TCP guarantees in-order, exactly-once delivery.
Which strings are "missing"? I'd wager it's the last ones; try flushing from the sending end.
Moreover, your "protocol" here (I'm talking about the application-layer protocol you're inventing) is lacking: you should consider sending the number of objects and/or their length so the receiver knows when it's actually done receiving them.
How long is each of the strings? If they aren't exactly 1024 bytes, they'll be merged by the remote TCP/IP stack into one big stream, which you then read in large blocks in your Receive call.
For example, using three Send calls to send "A", "B", and "C" will most likely arrive at your remote client as "ABC" (as either the remote stack or your own stack will buffer the bytes until they are read). If you need each string to arrive without being merged with other strings, look into adding a "protocol" with an identifier to mark the start and end of each string, or alternatively configure the socket to avoid buffering and combining packets.