C# sockets: messages between client and server

I'm currently learning how to use sockets in C#, and I have a question about how messages should be exchanged between the client and the server.
Currently I have a server application and a client application, and in each application I have some strings that act as commands. When, for example, the client needs the time from the server, I have a string like this:
public const string GET_TIME_COMMAND = "<GET_TIME_COMMAND>";
Then I have an if statement on the server that checks whether the message sent from the client starts with that string, and if so, it sends another message back to the client with another command and the time in a JSON string.
My question is: is this a good way to do it, and if not, could you advise me on another way to go about this?

TCP
Keep in mind that TCP is a stream based connection. You may or may not get the complete command in one message. You may even get multiple commands in one read.
To solve this, TCP messages usually have a unique start and stop sequence or byte that may not appear inside the message.
(SomeCommand)
Where ( is the start and ) is the stop symbol.
An alternative is to prepend a header to the actual message that contains the message length.
11 S O M E M E S S A G E
Where 11 is the message length and SOMEMESSAGE is the actual message. You'd usually transmit the length as a byte or ushort, not as a string literal.
In both cases you have to read over and over until you have one complete message - then you can dispatch it into the application.
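To make that concrete, here is a minimal sketch of length-prefixed framing over a TCP NetworkStream. The two-byte ushort prefix and the SendMessage/ReceiveMessage/ReadExactly names are illustrative choices, not part of the original question:

using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

static class Framing
{
    // Sends one logical message: a 2-byte length prefix (ushort) followed by the payload.
    public static void SendMessage(NetworkStream stream, string message)
    {
        byte[] payload = Encoding.UTF8.GetBytes(message);
        byte[] prefix = BitConverter.GetBytes((ushort)payload.Length);
        stream.Write(prefix, 0, prefix.Length);
        stream.Write(payload, 0, payload.Length);
    }

    // Reads exactly one logical message, looping until all of its bytes have arrived.
    public static string ReceiveMessage(NetworkStream stream)
    {
        byte[] prefix = ReadExactly(stream, 2);
        ushort length = BitConverter.ToUInt16(prefix, 0);
        byte[] payload = ReadExactly(stream, length);
        return Encoding.UTF8.GetString(payload);
    }

    // TCP may deliver fewer bytes than requested, so keep reading until 'count' bytes are buffered.
    static byte[] ReadExactly(NetworkStream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new IOException("Connection closed before a full message arrived.");
            offset += read;
        }
        return buffer;
    }
}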
Also TCP is connection based. You have to connect to the remote site. The advantage is that TCP makes sure that all messages are sent in the very order you put them in. TCP will also automatically re-send lost packets and you don't have to worry about that.
UDP
In contrast, UDP is message/packet based, but it is not reliable. You may or may not get the message and have to re-send it in some cases. Also, UDP doesn't have a notion of a "session". You would have to add that yourself if required.
The answer to your question depends on the protocol used. For TCP this won't work well with your current message format. You'd probably have to prepend a header.
You could use UDP, but then you may have to detect and re-send messages that got lost.

Related

How can I avoid the merging of messages from the same socket?

similar question:
C# Socket BeginReceive / EndReceive capturing multiple messages
I am currently managing the communication between a website and a WinForms application, which is done over a socket created this way:
Socket socket = new Socket(AddressFamily.InterNetwork,
                           SocketType.Stream,
                           ProtocolType.Tcp);
If the emitter sends two messages A ([TagBeginMessage:lengthMessageA] aaaaaaaaaaaaaaaa [EndMessage]) and B ([TagBeginMessage: lengthMessageB] bbbbbbbbbbbbbbbbbbb [EndMessage]), I expect that the receiver will get
[TagBeginMessage:lengthMessageA] aaaaaaaaaaaaaaaa [EndMessage][TagBeginMessage: lengthMessageB] bbbbbbbbbbbbbbbbbbb [EndMessage]
or
[TagBeginMessage: lengthMessageB] bbbbbbbbbbbbbbbbbbb [EndMessage][TagBeginMessage:lengthMessageA] aaaaaaaaaaaaaaaa [EndMessage]
This is indeed the case for the vast majority of messages; however, the necessarily asynchronous nature of the reception sometimes causes a bug when message A is quite long and message B quite short, in which case the receiver gets this:
[TagBeginMessage:lengthMessageA] aaaaaaaaaa[TagBeginMessageB: lengthMessageB] bbbbbbbbbbbbbbbbbbb [EndMessage]aaaaaa [EndMessageA]
This can still be parsed, although it requires a unique ending for each message. However, while I haven't seen it happen, I am afraid this means that the following case is also possible (since the socket sends its data in packets):
[TagBeginMessage:lengthMessageA] aaaaa[TagBeginMessageB: lengthMessageB] bbbbbbaaaabbbbbbbbbbbbb [EndMessage]aaaaaa [EndMessageA]
This is unparseable. Adding a length to the beginning of the message (as suggested in http://blog.stephencleary.com/2009/04/sample-code-length-prefix-message.html) helps detect the problem but doesn't solve it. What can I do to avoid this?
My current solutions are:
Send small messages. Not elegant, but it should work.
Send a very small message to signal that the socket's buffer is empty.
It sounds like you are sending data on the same connection from multiple threads simultaneously. That's fine, but if you do that, make sure you lock the connection to the current thread for the duration of sending out a logical package (length + data) - that way you will receive a complete packet (sent from one thread) before receiving anything that a different thread sent.
Under TCP/IP, packages are guaranteed to arrive in the same order that they were sent, but if you send, say, half a logical package (length + data) from one thread then half a package from another, then there is no way for the protocol layer or the receiving end to know that.
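A minimal sketch of that locking, assuming a hypothetical shared sendLock object and a SendPacket helper (the names and the 4-byte length prefix are illustrative):

using System;
using System.Net.Sockets;

static readonly object sendLock = new object();

// Writes one logical packet (length prefix + data) without letting another thread's bytes interleave.
static void SendPacket(Socket socket, byte[] data)
{
    byte[] prefix = BitConverter.GetBytes(data.Length);   // 4-byte length prefix
    lock (sendLock)
    {
        // No other thread can slip its bytes between these two sends.
        socket.Send(prefix);
        socket.Send(data);
    }
}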
You have already linked to Stephen Cleary's blog, so you know that TCP requires you to do some form of message framing. Here is another one of Cleary's posts that describes your problem.
In short, you must frame your TCP messages. You should also never see your final, unparseable example: TCP will deliver all of your data in the correct order.
I'm not an expert with sockets, but I have some experience. What you could do is split the data that's being sent. Say you have a very long string such as "AAAAAAAABBBBBBCCCCDDDDDEEE". Instead of sending the entire string through the socket at once, you could send all the A characters, then the Bs, then the Cs, then the Ds, and so on. Right before the user receives the message you could merge all the characters back together. That's just an idea.

Using TCP Sockets (C#), How do I detect if a message got interrupted in transit, on the receiver's end?

I'm writing a server application for my iPhone app. The section of the server I'm working on is the relay server. It essentially relays messages between iPhones, through a server, using TCP sockets. The server reads the length of the header from the stream, then reads that number of bytes from the stream. It deserializes the header and checks whether the message is to be relayed on to another iPhone (rather than being processed on the server).
If it has to be relayed, it begins reading bytes from the sender's socket, 1024 bytes at a time. After each 1024 bytes are received, it adds those bytes (as a "packet" of bytes) to the outgoing message queue, which is processed in order.
This is all fine, but what happens if the sender gets interrupted, so that it hasn't sent all its bytes (say, out of the 3,000 bytes it had to send, the sending iPhone goes into a tunnel after 2,500 bytes)?
This means that all the other devices are waiting on the remaining 500 bytes, which don't get relayed to them. Then if the sender (or anyone else for that matter) sends data to these sockets, they treat the start of the new message as the end of the last one, corrupting the data.
Obviously from the description above, I'm using message framing, but I think I'm missing something. From what I can see, message framing only lets the receiver know the exact number of bytes to read from the socket before assembling them into an object. Won't things start to get hairy once a byte or two goes astray at some point, throwing everything out of sync? Is there a standard way of getting back in sync again?
Won't things start to get hairy once a byte or two goes astray at some point, throwing everything out of sync? Is there a standard way of getting back in sync again?
TCP/IP itself ensures that no bytes go "missing" over a single socket connection.
Things are a bit more complex in your situation, where (if I understand correctly) you're using a server as a sort of multiplexer.
In this case, here are some options off the top of my head:
Have the server buffer the entire message from point A before sending it to point B.
Close the B-side sockets if an abnormal close is detected from the A side.
Change the receiving side of the protocol so that a B-side client can detect and recover from a partial A-stream without killing and re-establishing the socket. e.g., if the server gave a unique id to each incoming A-stream, then the B client would be able to detect if a different stream starts. Or have an additional length prefix, so the B client knows both the entire length to expect and the length for that individual message.
Which option you choose depends on what kind of data you're transferring and how easy the different parts are to change.
Regardless of the solution, be sure to include detection of half-open connections.
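For the first option above (have the server buffer the entire message from point A before sending it to point B), a minimal sketch assuming a 4-byte length prefix; the RelayOneMessage and ReadExactly names are illustrative, not part of the original protocol:

using System;
using System.IO;
using System.Net.Sockets;

// Reads one complete length-prefixed message from A, then forwards it to B in one piece.
// If A disconnects mid-message, the exception is thrown here and nothing partial reaches B.
static void RelayOneMessage(NetworkStream fromA, NetworkStream toB)
{
    byte[] prefix = ReadExactly(fromA, 4);
    int length = BitConverter.ToInt32(prefix, 0);
    byte[] body = ReadExactly(fromA, length);

    toB.Write(prefix, 0, prefix.Length);
    toB.Write(body, 0, body.Length);
}

static byte[] ReadExactly(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0) throw new IOException("Sender disconnected mid-message.");
        offset += read;
    }
    return buffer;
}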

How to safely stream data through a server socket to another socket?

I'm writing a server application for an iPhone application I'm designing. The iPhone app is written in C# (MonoTouch) and the server is written in C# too (.NET 4.0).
I'm using asynchronous sockets for the network layer. The server allows two or more iPhones ("devices") to connect to each other and be able to send data bi-directionally.
Depending on the incoming message, the server either processes the message itself, or relays the data through to the other device(s) in the same group as the sending device. It can make this decision by decoding the header of the packet first and deciding what type of packet it is.
This is done by framing the stream in a way that the first 8 bytes are two integers, the length of the header and the length of the payload (which can be much larger than the header).
The server reads (asynchronously) from the socket the first 8 bytes so it has the lengths of the two sections. It then reads again, up to the total length of the header section.
It then deserializes the header, and based on the information within, can see if the remaining data (payload) should be forwarded onto another device, or is something that the server itself needs to work with.
If it needs to be forwarded onto another device, then the next step is to read data coming into the socket in chunks of say, 1024 bytes, and write these directly using an async send via another socket that is connected to the recipient device.
This reduces the memory requirements of the server, as I'm not loading the entire packet into a buffer and then re-sending it down the wire to the recipient.
However, because of the nature of async sockets, I am not guaranteed to receive the entire payload in one read, so I have to keep reading until I receive all the bytes. In the case of relaying on to its final destination, this means that I'm calling BeginSend() for each chunk of bytes I receive from the sender and forwarding that chunk on to the recipient, one chunk at a time.
The issue with this is that because I am using async sockets, this leaves the possibility of another thread doing a similar operation with the same recipient (and therefore same final destination socket), and so it is likely that the chunks coming from both threads will get mixed up and corrupt all the data going to that recipient.
For example: If the first thread sends a chunk, and is waiting for the next chunk from the sender (so it can relay it onwards), the second thread could send one of its chunks of data, and corrupt the first thread's (and the second thread's for that matter) data.
As I write this, I'm wondering: is it as simple as just locking the socket object? Would this be the correct option, or could it cause other issues (e.g. problems receiving data through the locked socket that is being sent BACK from the remote device)?
Thanks in advance!
I was facing a similar scenario a while back. I don't have the complete solution anymore, but here's pretty much what I did:
I didn't use sync sockets, decided to explore the async sockets in C# - fun ride
I don't allow multiple threads to share a single resource unless I really have to
My "packets" were containing information about size, index and total packet count for a message
My packet's 1st byte was unique to signify that it's a start of a message, I used 0xAA
My packets's last 2 bytes were a result of a CRC-CCITT checksum (ushort)
The objects that did the receiving bit contained a buffer with all received bytes. From that buffer I was extracting "complete" messages once the size was ok, and the checksum matched
The only "locking" I needed to do was in the temp buffer so I could safely analyze it's contents between write/read operations
Hope that helps a bit
Not sure where the problem is. Since you mentioned servers, I assume TCP, yes?
A phone needs to communicate some of your PDU to another phone. It connects as a client to the server on the other phone. A socket-pair is established. It sends the data off to the server socket. The socket-pair is unique - no other streams that might be happening between the two phones should interrupt this (they will slow it up, of course).
I don't see how async/sync sockets, assuming implemented correctly, should affect this, either should work OK.
Is there something I cannot see here?
BTW, Maciek's plan to bolster up the protocol by adding an 'AA' start byte is an excellent idea - protocols that depend on sending just a length as the first element always seem to screw up eventually and result in a node trying to dequeue more bytes than there are atoms in the universe.
Rgds,
Martin
OK, now I understand the problem (I completely misunderstood the topology of the OP's network - I thought each phone was running a TCP server as well as client/s, but there is just one server on a PC/whatever, a-la chatrooms). I don't see why you could not lock the socket class with a mutex, thus serializing the messages. You could queue the messages to the socket, but this has the memory implications that you are trying to avoid.
You could dedicate a connection to supplying only instructions to the phone, eg 'open another socket connection to me and return this GUID - a message will then be streamed on the socket'. This uses up a socket-pair just for control and halves the capacity of your server :(
Are you stuck with the protocol you have described, or can you break your messages up into chunks with some ID in each chunk? You could then multiplex the messages onto one socket pair.
Another alternative, which again would require chunking the messages, is to introduce a 'control message' (maybe a chunk starting with 55 instead of AA) that contains a message ID (GUID?), which the phone uses to establish a second socket connection to the server; it passes up the ID and is then sent the second message on the new socket connection.
Another, (getting bored yet?), way of persuading the phone to recognise that a new message might be waiting would be to close the server socket that the phone is receiving a message over. The phone could then connect up again, tell the server that it only got xxxx bytes of message ID yyyy. The server could then reply with an instruction to open another socket for new message zzzz and then resume sending message yyyy. This might require some buffering on the server to ensure no data gets lost during the 'break'. You might want to implement this kind of 'restart streaming after break' functionality anyway since phones tend to go under bridges/tunnels just as the last KB of a 360MB video file is being streamed :( I know that TCP should take care of dropped packets, but if the phone wireless layer decides to close the socket for whatever reason...
None of these solutions is particularly satisfying. Interested to see what other ideas crop up.
Rgds,
Martin
Thanks for the help everyone. I've realised the simplest approach is to use synchronous send commands on the client, or at least a send command that must complete before the next item is sent. I'm handling this with my own send queue on the client, rather than various parts of the app just calling Send() whenever they need to send something.
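For anyone curious, a minimal sketch of such a client-side send queue, assuming a BlockingCollection and a single dedicated sender thread (the SendQueue name and the framing of the queued byte arrays are illustrative):

using System.Collections.Concurrent;
using System.Net.Sockets;
using System.Threading;

class SendQueue
{
    readonly NetworkStream stream;
    readonly BlockingCollection<byte[]> pending = new BlockingCollection<byte[]>();

    public SendQueue(NetworkStream stream)
    {
        this.stream = stream;
        // One dedicated thread drains the queue, so sends never interleave.
        new Thread(SendLoop) { IsBackground = true }.Start();
    }

    // Any part of the app can enqueue; only the sender thread touches the socket.
    public void Enqueue(byte[] message)
    {
        pending.Add(message);
    }

    void SendLoop()
    {
        foreach (byte[] message in pending.GetConsumingEnumerable())
        {
            stream.Write(message, 0, message.Length);
        }
    }
}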

Getting started with socket programming in C# - Best practices

I have seen many resources here on SO about sockets. I believe none of them covered the details that I wanted to know. In my application, the server does all the processing and sends periodic updates to the clients.
The intention of this post is to cover all the basic ideas required when developing a socket application and to discuss best practices. Here are the basic things that you will see with almost all socket-based applications.
1 - Binding and listening on a socket
I am using the following code. It works well on my machine. Do I need to take care of anything else when I deploy this on a real server?
IPHostEntry localHost = Dns.GetHostEntry(Dns.GetHostName());
IPEndPoint endPoint = new IPEndPoint(localHost.AddressList[0], 4444);
serverSocket = new Socket(endPoint.AddressFamily, SocketType.Stream,
                          ProtocolType.Tcp);
serverSocket.Bind(endPoint);
serverSocket.Listen(10);
2 - Receiving data
I have used a 255-byte array. So when I am receiving data which is more than 255 bytes, I need to call the receive method until I get the full data, right? Once I have all the data, I need to append all the bytes received so far to get the full message. Is that correct? Or is there a better approach?
3 - Sending data and specifying the data length
Since there is no way in TCP to find the length of the message to receive, I am planning to add the length to the message. This will be the first byte of the packet, so client systems know how much data is available to read.
Any other better approach?
4 - Closing the client
When the client is closed, it will send a message to the server indicating the close. The server will remove the client details from its client list. The following is the code used at the client side to disconnect the socket (messaging part not shown).
client.Shutdown(SocketShutdown.Both);
client.Close();
Any suggestions or problems?
5 - Closing the server
The server sends a message to all clients indicating the shutdown. Each client will disconnect the socket when it receives this message. Clients will send the close message to the server and close. Once the server receives the close message from all the clients, it disconnects the socket and stops listening. Dispose is called on each client socket to release the resources. Is that the correct approach?
6 - Unknown client disconnections
Sometimes, a client may disconnect without informing the server. My plan to handle this is: when the server sends messages to all clients, it checks the socket status. If a client is not connected, the server removes that client from the client list and closes the socket for that client.
Any help would be great!
Since this is 'getting started' my answer will stick with a simple implementation rather than a highly scalable one. It's best to first feel comfortable with the simple approach before making things more complicated.
1 - Binding and listening
Your code seems fine to me, personally I use:
serverSocket.Bind(new IPEndPoint(IPAddress.Any, 4444));
Rather than going the DNS route, but I don't think there is a real problem either way.
1.5 - Accepting client connections
Just mentioning this for completeness' sake... I am assuming you are doing this otherwise you wouldn't get to step 2.
2 - Receiving data
I would make the buffer a little longer than 255 bytes, unless you can expect all your server messages to be at most 255 bytes. I think you'd want a buffer that is likely to be larger than the TCP packet size so you can avoid doing multiple reads to receive a single block of data.
I'd say picking 1500 bytes should be fine, or maybe even 2048 for a nice round number.
Alternately, maybe you can avoid using a byte[] to store data fragments, and instead wrap your server-side client socket in a NetworkStream, wrapped in a BinaryReader, so that you can read the components of your message directly from the socket without worrying about buffer sizes.
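For example, a minimal sketch of that BinaryReader approach, assuming a simple [int length][payload] layout and a hypothetical ProcessMessage handler (both assumptions, not from the question):

using System.IO;
using System.Net.Sockets;

// socket is an already-connected Socket; ProcessMessage is a hypothetical handler.
static void ReceiveLoop(Socket socket)
{
    using (var stream = new NetworkStream(socket))
    using (var reader = new BinaryReader(stream))
    {
        while (true)
        {
            int length = reader.ReadInt32();            // 4-byte length prefix (throws EndOfStreamException on disconnect)
            byte[] payload = reader.ReadBytes(length);  // ReadBytes loops internally until 'length' bytes arrive
            if (payload.Length < length) break;         // connection ended mid-message
            ProcessMessage(payload);                    // hypothetical handler
        }
    }
}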
3 - Sending data and specifying data length
Your approach will work just fine, but it does obviously require that it is easy to calculate the length of the packet before you start sending it.
Alternately, your message format (the order of its components) can be designed so that at any time the client is able to determine whether more data should follow (for example, code 0x01 means an int and a string come next, code 0x02 means 16 bytes come next, etc.). Combined with the NetworkStream approach on the client side, this can be a very effective approach.
To be on the safe side you may want to add validation of the components being received to make sure you only process sane values. For example, if you receive an indication for a string of length 1TB you may have had a packet corruption somewhere, and it may be safer to close the connection and force the client to re-connect and 'start over'. This approach gives you a very good catch-all behaviour in case of unexpected failures.
4/5 - Closing the client and the server
Personally I would opt for just Close without further messages; when a connection is closed you will get an exception on any blocking read/write at the other end of the connection which you will have to cater for.
Since you have to cater for 'unknown disconnections' anyway to get a robust solution, making disconnecting any more complicated is generally pointless.
6 - Unknown disconnections
I would not trust even the socket status... it is possible for a connection to die somewhere along the path between client / server without either the client or the server noticing.
The only guaranteed way to tell a connection that has died unexpectedly is when you next try to send something along the connection. At that point you will always get an exception indicating failure if anything has gone wrong with the connection.
As a result, the only fool-proof way to detect all unexpected connections is to implement a 'ping' mechanism, where ideally the client and the server will periodically send a message to the other end that only results in a response message indicating that the 'ping' was received.
To optimise out needless pings, you may want to have a 'time-out' mechanism that only sends a ping when no other traffic has been received from the other end for a set amount of time (for example, if the last message from the server is more than x seconds old, the client sends a ping to make sure the connection has not died without notification).
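A minimal sketch of that idle-ping idea, assuming a dedicated ping byte and a timestamp that the receive loop updates; the names, the 0xFF opcode and the 30-second threshold are all illustrative:

using System;
using System.IO;
using System.Net.Sockets;
using System.Threading;

class PingSender
{
    static readonly byte[] PingMessage = { 0xFF };        // assumed "ping" opcode, not part of any real protocol
    readonly NetworkStream stream;
    readonly Timer timer;                                 // kept in a field so it isn't garbage collected
    public DateTime LastReceivedUtc = DateTime.UtcNow;    // the receive loop should update this on every message

    public PingSender(NetworkStream stream)
    {
        this.stream = stream;
        // Check once per second; only ping when nothing has arrived for 30 seconds.
        timer = new Timer(_ => PingIfIdle(), null, 1000, 1000);
    }

    void PingIfIdle()
    {
        if (DateTime.UtcNow - LastReceivedUtc > TimeSpan.FromSeconds(30))
        {
            try { stream.Write(PingMessage, 0, PingMessage.Length); }
            catch (IOException) { /* the connection is dead - tear it down and reconnect */ }
        }
    }
}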
More advanced
If you want high scalability you will have to look into asynchronous methods for all the socket operations (Accept / Send / Receive). These are the 'Begin/End' variants, but they are a lot more complicated to use.
I recommend against trying this until you have the simple version up and working.
Also note that if you are not planning to scale further than a few dozen clients this is not actually going to be a problem regardless. Async techniques are really only necessary if you intend to scale into the thousands or hundreds of thousands of connected clients while not having your server die outright.
I have probably forgotten a whole bunch of other important suggestions, but this should be enough to get you a fairly robust and reliable implementation to start with.
1 - Binding and listening on a socket
Looks fine to me. Your code will bind the socket only to one IP address though. If you simply want to listen on any IP address/network interface, use IPAddress.Any:
serverSocket.Bind(new IPEndPoint(IPAddress.Any, 4444));
To be future proof, you may want to support IPv6. To listen on any IPv6 address, use IPAddress.IPv6Any in place of IPAddress.Any.
Note that you cannot listen on any IPv4 and any IPv6 address at the same time, except if you use a Dual-Stack Socket. This will require you to unset the IPV6_V6ONLY socket option:
serverSocket.SetSocketOption(SocketOptionLevel.IPv6, (SocketOptionName)27, 0); // 27 is IPV6_V6ONLY (SocketOptionName.IPv6Only in later framework versions)
To enable Teredo with your socket, you need to set the PROTECTION_LEVEL_UNRESTRICTED socket option:
serverSocket.SetSocketOption(SocketOptionLevel.IPv6, (SocketOptionName)23, 10);
2 - Receiving data
I'd recommend using a NetworkStream which wraps the socket in a Stream instead of reading the chunks manually.
Reading a fixed number of bytes is a bit awkward though:
using (var stream = new NetworkStream(serverSocket)) {
    var buffer = new byte[MaxMessageLength];
    while (true) {
        int type = stream.ReadByte();
        if (type == BYE) break;
        int length = stream.ReadByte();
        int offset = 0;
        do
            offset += stream.Read(buffer, offset, length - offset);
        while (offset < length);
        ProcessMessage(type, buffer, 0, length);
    }
}
Where NetworkStream really shines is that you can use it like any other Stream. If security is important, simply wrap the NetworkStream in a SslStream to authenticate the server and (optionally) the clients with X.509 certificates. Compression works the same way.
var sslStream = new SslStream(stream, false);
sslStream.AuthenticateAsServer(serverCertificate, false, SslProtocols.Tls, true);
// receive/send data SSL secured
3 - Sending data and specifying the data length
Your approach should work, although you may not want to go down the road of reinventing the wheel and designing a new protocol for this. Have a look at BEEP, or maybe even something simple like protobuf.
Depending on your goals, it might be worth thinking about choosing an abstraction above sockets like WCF or some other RPC mechanism.
4/5/6 - Closing & Unknown disconnections
What jerryjvl said :-) The only reliable detection mechanisms are pings or keep-alives sent while the connection is idle.
While you have to deal with unknown disconnections in any case, I'd personally keep some protocol element in to close a connection in mutual agreement instead of just closing it without warning.
Consider using asynchronous sockets. You can find more information on the subject in
Using an Asynchronous Server Socket
Using an Asynchronous Client Socket

NetworkStream.Write returns immediately - how can I tell when it has finished sending data?

Despite the documentation, NetworkStream.Write does not appear to wait until the data has been sent. Instead, it waits until the data has been copied to a buffer and then returns. That buffer is transmitted in the background.
This is the code I have at the moment. Whether I use ns.Write or ns.BeginWrite doesn't matter - both return immediately. The EndWrite also returns immediately (which makes sense since it is writing to the send buffer, not writing to the network).
bool done;

void SendData(TcpClient tcp, byte[] data)
{
    NetworkStream ns = tcp.GetStream();
    done = false;
    ns.BeginWrite(bytWriteBuffer, 0, data.Length, myWriteCallBack, ns);
    while (done == false) Thread.Sleep(10);
}

public void myWriteCallBack(IAsyncResult ar)
{
    NetworkStream ns = (NetworkStream)ar.AsyncState;
    ns.EndWrite(ar);
    done = true;
}
How can I tell when the data has actually been sent to the client?
I want to wait 10 seconds (for example) for a response from the server after sending my data, otherwise I'll assume something was wrong. If it takes 15 seconds to send my data, then it will always time out, since I can only start counting from when NetworkStream.Write returns - which is before the data has been sent. I want to start counting 10 seconds from when the data has left my network card.
The amount of data and the time to send it could vary - it could take 1 second to send, it could take 10 seconds, it could take a minute. The server does send a response when it has received the data (it's an SMTP server), but I don't want to wait forever if my data was malformed and the response will never come, which is why I need to know whether I'm waiting for the data to be sent or waiting for the server to respond.
I might want to show the status to the user - I'd like to show "sending data to server", and "waiting for response from server" - how could I do that?
I'm not a C# programmer, but the way you've asked this question is slightly misleading. The only way to know when your data has been "received", for any useful definition of "received", is to have a specific acknowledgment message in your protocol which indicates the data has been fully processed.
The data does not "leave" your network card, exactly. The best way to think of your program's relationship to the network is:
your program -> lots of confusing stuff -> the peer program
A list of things that might be in the "lots of confusing stuff":
the CLR
the operating system kernel
a virtualized network interface
a switch
a software firewall
a hardware firewall
a router performing network address translation
a router on the peer's end performing network address translation
So, if you are on a virtual machine, which is hosted under a different operating system, that has a software firewall which is controlling the virtual machine's network behavior - when has the data "really" left your network card? Even in the best case scenario, many of these components may drop a packet, which your network card will need to re-transmit. Has it "left" your network card when the first (unsuccessful) attempt has been made? Most networking APIs would say no, it hasn't been "sent" until the other end has sent a TCP acknowledgement.
That said, the documentation for NetworkStream.Write seems to indicate that it will not return until it has at least initiated the 'send' operation:
The Write method blocks until the requested number of bytes is sent or a SocketException is thrown.
Of course, "is sent" is somewhat vague for the reasons I gave above. There's also the possibility that the data will be "really" sent by your program and received by the peer program, but the peer will crash or otherwise not actually process the data. So you should do a Write followed by a Read of a message that will only be emitted by your peer when it has actually processed the message.
TCP is a "reliable" protocol, which means the data will be received at the other end if there are no socket errors. I have seen numerous efforts at second-guessing TCP with a higher level application confirmation, but IMHO this is usually a waste of time and bandwidth.
Typically the problem you describe is handled through normal client/server design, which in its simplest form goes like this...
The client sends a request to the server and does a blocking read on the socket waiting for some kind of response. If there is a problem with the TCP connection then that read will abort. The client should also use a timeout to detect any non-network related issue with the server. If the request fails or times out then the client can retry, report an error, etc.
Once the server has processed the request and sent the response it usually no longer cares what happens - even if the socket goes away during the transaction - because it is up to the client to initiate any further interaction. Personally, I find it very comforting to be the server. :-)
In general, I would recommend sending an acknowledgment from the client anyway. That way you can be 100% sure the data was received, and received correctly.
If I had to guess, the NetworkStream considers the data to have been sent once it hands the buffer off to the Windows Socket. So, I'm not sure there's a way to accomplish what you want via TcpClient.
I can not think of a scenario where NetworkStream.Write wouldn't send the data to the server as soon as possible. Barring massive network congestion or disconnection, it should end up on the other end within a reasonable time. Is it possible that you have a protocol issue? For instance, with HTTP the request headers must end with a blank line, and the server will not send any response until one occurs -- does the protocol in use have a similar end-of-message characteristic?
Here's some cleaner code than your original version, removing the delegate, field, and Thread.Sleep. It performs exactly the same way functionally.
void SendData(TcpClient tcp, byte[] data) {
    NetworkStream ns = tcp.GetStream();
    // BUG?: should bytWriteBuffer == data?
    IAsyncResult r = ns.BeginWrite(bytWriteBuffer, 0, data.Length, null, null);
    r.AsyncWaitHandle.WaitOne();
    ns.EndWrite(r);
}
Looks like the question was modified while I wrote the above. The .WaitOne() may help your timeout issue. It can be passed a timeout parameter. This is a lazy wait -- the thread will not be scheduled again until the result is finished, or the timeout expires.
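For example, a sketch of the same code with a timeout on the wait; note this only bounds how long you wait for the local write to complete, and it writes data directly (the original bytWriteBuffer looked like a bug):

using System;
using System.Net.Sockets;

// Returns false if the local write did not complete within 10 seconds.
static bool SendDataWithTimeout(TcpClient tcp, byte[] data)
{
    NetworkStream ns = tcp.GetStream();
    IAsyncResult r = ns.BeginWrite(data, 0, data.Length, null, null);
    if (!r.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(10)))
        return false;            // timed out; the write may still complete in the background
    ns.EndWrite(r);
    return true;
}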
Trying to understand the intent of the .NET NetworkStream designers, it must have been designed this way on purpose: after Write, the data to send is no longer handled by .NET, so it is reasonable that Write returns immediately (and the data will be sent out from the NIC some time soon).
So in your application design you should follow this pattern rather than trying to make it work your way. For example, using a longer timeout before any data is received from the NetworkStream can compensate for the time consumed before your command leaves the NIC.
In any case, it is bad practice to hard-code a timeout value in source files. If the timeout value is configurable at runtime, everything should work fine.
How about using the Flush() method?
ns.Flush();
That should ensure the data is written before continuing.
Below .NET are Windows sockets, which use TCP.
TCP uses ACK packets to notify the sender that the data has been transferred successfully.
So the sending machine knows when the data has been transferred, but there is no way (that I am aware of) to get that information in .NET.
Edit:
Just an idea, never tried:
Write() blocks only if the socket's buffer is full. So if we lower that buffer's size (SendBufferSize) to a very low value (8? 1? 0?) we may get what we want :)
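If you want to experiment with that (untested) idea, the buffer size is exposed directly on TcpClient:

// Untested idea, as noted above; very small send buffers can hurt throughput badly.
tcp.SendBufferSize = 8;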
Perhaps try setting
tcp.NoDelay = true
