TcpClient stream irregularities? - C#

I have a TcpClient that creates a stream that I read from whenever DataAvailable is true.
Every 20 seconds during which !DataAvailable, I ping the socket with an ACK message to keep the stream from closing.
But I seem to be getting mixed results. It seems like every other time I open the stream (basically restart my Service) I get transport errors.
This is a shortened version of my Connect function:
client = new TcpClient();
client.Connect(new IPEndPoint(clientAddress, senderPort));
stream = client.GetStream();
bool status = SendMessage(seq, sync, MessageTypes.Init);
The SendMessage function does:
if (stream == null) return false;
stream.Write(TransmitBuffer, 0, TransmitMessageLength);
My Close function does:
if (stream != null)
{
SendMessage(seq, sync, MessageTypes.Finish);
stream.Close();
}
stream = null;
client.Close();
client = null;
It is expected that the SendMessage calls will fail occasionally due to the nature of the socket.
But sometimes, once I Connect, everything runs fine with no failed messages. Other times the ACKs will fail. When the ACKs fail, I call Close, which forces a Connect and validates that the other end of the socket is open. If that fails, I know that end is down. But sometimes that call doesn't fail, and then 20 seconds later the ACK does.
Can anyone give me an opinion on why this may happen? Is 20 seconds too long to wait? Am I not closing my end of the socket properly?
The specific error message I'm fighting with is:
Unable to write data to the transport connection: An established connection was aborted by the software in your host machine.
And it occurs at stream.Write(TransmitBuffer, 0, TransmitMessageLength);

The main thing that jumps out at me in your implementation is that it looks like you're treating the Stream as the Connection, and it isn't. Your checks on the Stream instance should instead be checks on the TcpClient instance. I'm not sure whether that's the source of your problem, but it definitely looks strange to me.
Instead of this:
stream = client.GetStream();
if (stream != null)
{
SendMessage(seq, sync, MessageTypes.Finish);
stream.Close();
}
stream = null;
I usually do something more like this:
if (client != null)
{
if (client.Connected)
{
client.GetStream().Close();
}
client.Close();
client = null;
}
You should be checking TcpClient.Connected before working with the stream, not the stream itself.
Another thing I would mention is to be sure to always use the async methods to connect, read, and write with your TcpClient. The synchronous ones are easier, but my experience has been that relying on them can get you into trouble.
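For illustration, a minimal sketch of that async pattern (the host, port, and buffer size are placeholders, not values from the question; assumes System.Net.Sockets and System.Threading.Tasks):
// Minimal async sketch; not the poster's actual code.
async Task SendAndReceiveAsync(string host, int port, byte[] payload)
{
    using var client = new TcpClient();
    await client.ConnectAsync(host, port);                        // async connect
    NetworkStream stream = client.GetStream();
    await stream.WriteAsync(payload, 0, payload.Length);          // async write
    var buffer = new byte[1024];
    int read = await stream.ReadAsync(buffer, 0, buffer.Length);  // async read
    // read == 0 means the remote end has shut down its sending side
}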

In general, putting a protocol on top of TCP is a big mistake. It can only make the connection less reliable. TCP already makes a very strong guarantee that data sent from one machine is going to be received by another one on a network. Only very gross external circumstances can make that fail. Things like equipment power loss or unscheduled reboots.
A connection should not be broken unless one of the machines intentionally closes the socket. Which should of course always be done in a predictable way. A logical end to a transaction or an explicit message that a machine is signing-off.
You didn't give any motivation for adding this "ACK protocol" to your connection logic, other than "keep the stream from closing". I think what you are seeing here is that it just doesn't work; it does not, in fact, keep the stream from closing. So it still goes wrong the way it did before you added the protocol layer: you are still getting unexpected "An established connection was aborted" exceptions.
An example of how you made it less reliable is the 20-second timeout check you added. TCP doesn't use a 20-second timeout. It uses an exponential back-off algorithm to check for timeouts; typically it doesn't give up until at least 45 seconds have passed. So you'll declare the connection dead before TCP does.
Hard to give advice on how to move forward with this, but clearly it is not by adding a protocol; you tried it and it did not work. You will have to find out why the connection is getting broken unexpectedly. Unfortunately that does require leg-work: you have to get insight into the kind of network equipment and software that sits between your machine and the server, with some expectation that the problem is located at the other end of the wire, since that's the end that's hardest to diagnose. Getting the site's network admin involved with your problem is an important first step.

We had a similar problem when developing a network communication library. Rather than use an ACK, we found the most reliable solution was just to send some null data (i.e. a keep-alive), in this case a byte[1] with a zero value. This would result in either:
A successful send, in which case the data could be ignored on the receive end, or
A failed send, which would immediately cause the connection to be closed and reestablished.
Either of these outcomes ensured the connection was always in a usable state.
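A rough sketch of that approach (the field name and the Close()/Connect() helpers are assumptions standing in for the library's own reconnect logic; assumes System.IO and System.Net.Sockets):
// Keep-alive sketch: send a single zero byte; on failure, tear down and reconnect.
private static readonly byte[] KeepAliveByte = new byte[1];   // one zero-valued byte
private void SendKeepAlive(NetworkStream stream)
{
    try
    {
        stream.Write(KeepAliveByte, 0, KeepAliveByte.Length);  // the receiver ignores it
    }
    catch (IOException)
    {
        Close();     // hypothetical helper: close the broken connection
        Connect();   // hypothetical helper: reestablish it
    }
}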

Why do you send ACK? You should send SYN.
And you shouldn't send ACK, SYN, RST, or set any other flag for that matter; that is dabbling with TCP internals. You're creating an application-level protocol, and keep-alive is part of that. Besides, IIRC, even though there is http://msdn.microsoft.com/en-us/library/windows/desktop/ee470551%28v=vs.85%29.aspx, the connection will stay open until you close it.
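If the goal is a transport-level TCP keep-alive rather than an application-level one, it can be turned on from .NET through the Socket underneath the TcpClient. A sketch with purely illustrative timing values (not taken from the original posts):
// Sketch: enable TCP-level keep-alive on the socket beneath a TcpClient (Windows).
// Layout of the 12-byte structure: on/off flag, idle time before the first probe (ms),
// interval between probes (ms). The values below are examples only.
byte[] keepAliveValues = new byte[12];
BitConverter.GetBytes((uint)1).CopyTo(keepAliveValues, 0);      // enable
BitConverter.GetBytes((uint)20000).CopyTo(keepAliveValues, 4);  // 20 s idle
BitConverter.GetBytes((uint)1000).CopyTo(keepAliveValues, 8);   // 1 s between probes
client.Client.IOControl(IOControlCode.KeepAliveValues, keepAliveValues, null);
This only keeps the peer's TCP stack (and any middleboxes) aware that the connection is alive; it is not a substitute for an application-level protocol.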

Related

TCP socket fails to (reliably) deliver what is sent after a restart of application

I basically create a socket, perform a couple of Sends, Shutdown the socket and Dispose it. This is repeated for the duration of the application session as data is being queued to be sent.
This works without issue until I restart the receiving application. Then I find at the receiving end that the last byte array I send in this sequence is not always received. The receiver knows it should get an End marker at some point, telling it this is it, so it can close the receive socket. I found that after an application restart the End marker does not get delivered reliably. The receiver then continues to wait for it; it keeps calling BeginReceive and getting zero bytes. This results in a high CPU load and the receive socket not being closed.
Restarting the receiving application does not fix it; a reboot of the machine, however, does. Also, restarting all applications and using different port numbers fixes it.
I can mitigate this by trying to receive three times with a short wait and giving up if nothing more comes in, assuming it is not going to happen anymore and this is indeed the end even if no explicit End marker was received. But I want to understand what is happening.
The experience suggests it has something to do with Windows recycling sockets under the hood.
Is there something I can do to get a "clean" socket other than using a different port number? Can anyone explain what is happening?
This is the crux of the sending code, in which the End constant (just 5 fixed bytes) does not always make it to the receiver:
using (socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp))
{
socket.SendBufferSize = 0x10000; // increase send buffer size from default 8K to 64K
socket.Connect(this.Host, this.Port);
bool oneOrMoreSent = false;
while (this.queue.TryDequeue(out IMessage? msg))
{
if (msg != null)
{
byte[] packet = (msg as Message).Packet ??= CreatePacket(msg);
_ = socket.Send(packet);
oneOrMoreSent = true;
}
}
if (oneOrMoreSent)
{
_ = socket.Send(End);
}
socket.Shutdown(SocketShutdown.Send);
}
[Edit]
The article about half-closed connections pointed to by JonasH was helpful and confirmed the statement of the other commenter with the ridiculous name. The application I implemented the sending code in is not the most well-behaved regarding shutdown logic, and we do use kill commands to make it go away when needed, so this would explain the emergence of half-closed connections where the receiver waits for more data that should come but never does. It is still weird that restarting the receiving application apparently does not release/clean up the receiving socket.
Understanding this a little better, I settled for a straightforward fix. If the receiver expects more data but repeatedly doesn't get it, waiting 1 ms in between BeginReceive calls, it will now accept that the sending end is dead and return from the handler. So the receiving logic will never stubbornly wait for data and thereby lock up the socket forever. This safeguard should have been there in the first place, and it fixes my problem.
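A simplified sketch of that safeguard (synchronous Receive standing in for the actual BeginReceive logic; the retry count of three is an assumption based on the description above):
// Sketch: stop waiting after a few consecutive empty receives.
int emptyReads = 0;
var buffer = new byte[4096];
while (emptyReads < 3)
{
    int read = socket.Receive(buffer);   // the real code uses BeginReceive
    if (read > 0)
    {
        emptyReads = 0;
        // process 'read' bytes; return once the End marker has been seen
    }
    else
    {
        emptyReads++;
        Thread.Sleep(1);                 // 1 ms between attempts, as described
    }
}
// assume the sender is gone; return from the handler so the socket can be closed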

C# - Check TCP/IP socket status from client side

I'd like to provide my TCP/IP client class with a CheckConnection function so that I can check whether something has gone wrong (my own client disconnected, the server disconnected, the server got stuck, ...).
I have something like that:
bool isConnectionActive = false;
if (Client.Poll(100000, SelectMode.SelectWrite) == true)
isConnectionActive = true;
based on what MSDN says:
SelectWrite: true, if processing a Connect(EndPoint), and the connection has succeeded; -or- true if data can be sent; otherwise, returns false.
The point is that, testing this with a simple server application, I always get true from CheckConnection, even if the server's listener has been closed and even if the server application has been shut down. That's weird, because in those cases I expect that no connection is being processed (it was already connected minutes ago) and that no data can be sent.
I have already implemented a similar connection check on the server side using a combination of Poll with SelectRead and Available, and it seems to work properly. So should I now write something similar on the client side too? Is the SelectWrite approach correct (and I'm just using it improperly)?
There are lots of things you can check but none of them are assured to give you the result you are looking for. Even the implementation you have on the server will not work 100% of the time. I guarantee it will fail one day.
There are FIN packets, which should be sent from the client to the server, and vice versa when a connection is closed, but there is no guarantee that these will be delivered, or even processed.
This is generally known as the TCP Half Open problem.
Closing a TCP Socket is a mutually agreed process, you generally have a messaging protocol which tells the other end that it's closing, or you have some predefined set of instructions and you close after that.
The only reliable way to 100% detect if a remote socket is closed is to send some data to it. Only if you get an error back will you know if the socket has closed.
Some applications which don't send a lot of data implement a keep-alive protocol, they simply send/receive a few bytes every minute, so they know that the remote endpoint is present.
You can technically have two servers that are in a connected state and haven't sent data to each other for 10 years. Each end continues to believe that the other end is there until one tries to send some data and finds out it isn't.
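In code, the "send something and see if it fails" check could look roughly like this (the probe payload is whatever your own protocol tolerates; this is only a sketch):
// Sketch: the only dependable liveness check is an actual send.
bool ProbeConnection(Socket socket, byte[] probe)
{
    try
    {
        socket.Send(probe);   // e.g. an application-level keep-alive message
        return true;          // the connection was alive at the moment of sending
    }
    catch (SocketException)
    {
        return false;         // the connection is broken or the peer is gone
    }
}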

socket disconnection notification method

I just searched for a possible solution to identify when the client disconnects.
I found this:
public bool IsConnected( Socket s)
{
try
{
return !(s.Poll(1, SelectMode.SelectRead) && s.Available == 0);
}
catch (SocketException) { return false; }
}
I'm using a while loop in my main with Thread.Sleep(500) and running the IsConnected method. It works all right when I run it through Visual Studio: when I click stop debugging, it actually notifies me on the server side. But when I just go to the exe in the bin directory and launch it, it does indeed notify me of a connection; yet when I close the program (manually via the 'x' button, or through Task Manager) the IsConnected method apparently still returns true...
I'm using a simple TCP connection:
client = new TcpClient();
client.Connect("10.0.0.2", 10);
server:
Socket s = tcpClient.Client;
while(true)
{
if (!IsConnected(s))
MessageBox.Show("disconnected");
}
(It's running on a thread, by the way.)
Any suggestions, guys?
I even tried to close the connection when the client closes:
private void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
client.Close();
s.Close();
Environment.Exit(0);
}
I don't know what to do.
What you are asking for is not possible. TCP will not report an error on the connection unless an attempt is made to send on the connection. If all your program ever does is receive, it will never notice that the connection no longer exists.
There are some platform-dependent exceptions to this rule, but none involving the simple disappearance of the remote endpoint.
The correct way for a client to disconnect is for it to gracefully close the connection with a "shutdown" operation. In .NET, this means the client code calls Socket.Shutdown(SocketShutdown.Send). The client must then continue to receive until the server calls Socket.Shutdown(SocketShutdown.Both). Note that the shutdown "reason" is generally "send" for the endpoint initiating the closure, and "both" for the endpoint acknowledging and completing the closure.
Each endpoint will detect that the other endpoint has shutdown its end by the completion of a receive operation with 0 as the byte count return value for that operation. Neither endpoint should actually close the socket (i.e. call Socket.Close()) until this two-way graceful closure has completed. I.e. each endpoint has both called Socket.Shutdown() and seen a zero-byte receive operation completion.
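In code, the initiating endpoint's half of that handshake might look roughly like this (a sketch; the buffer size is arbitrary and error handling is omitted):
// Sketch: graceful closure from the endpoint that initiates it.
socket.Shutdown(SocketShutdown.Send);        // "I have nothing more to send"
var buffer = new byte[4096];
while (socket.Receive(buffer) > 0)
{
    // keep receiving until the peer has sent everything it still wants to send
}
// a zero-byte receive means the peer has shut down too; now it is safe to close
socket.Close();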
The above is how graceful closure works, and it should be the norm for server/client interactions. Of course, things do break. A client could crash, the network might be disconnected, etc. Typically, the right thing to do is to delay recognition of such problems as long as possible; for example, as long as the server and client have no need to actually communicate, then a temporary network outage should not cause an error. Forcing one is pointless in that case.
In other words, don't add code to try to detect a connection failure. For maximum reliability, let the network try to recover on its own.
In some less-common cases, it is desirable to detect connection failures earlier. In these cases, you can enable "keep alive" on the socket (to force data to be sent over the connection, thus detecting interruptions in the connection…see SocketOptionName.KeepAlive) or implement some timeout mechanism (to force the connection to fail if no data is sent after some period of time). I would generally recommend against the use of this kind of technique, but it's a valid approach in some cases.
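Enabling the option itself is a one-liner (note that, by default, the OS typically waits on the order of hours before sending the first keep-alive probe unless the timings are tuned separately):
// Sketch: turn on TCP keep-alive for a connected Socket.
socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);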

Getting started with socket programming in C# - Best practices

I have seen many resources here on SO about sockets. I believe none of them covered the details which I wanted to know. In my application, the server does all the processing and sends periodic updates to the clients.
Intention of this post is to cover all the basic ideas required when developing a socket application and discuss the best practices. Here are the basic things that you will see with almost all socket based applications.
1 - Binding and listening on a socket
I am using the following code. It works well on my machine. Do I need to take care about something else when I deploy this on a real server?
IPHostEntry localHost = Dns.GetHostEntry(Dns.GetHostName());
IPEndPoint endPoint = new IPEndPoint(localHost.AddressList[0], 4444);
serverSocket = new Socket(endPoint.AddressFamily, SocketType.Stream,
ProtocolType.Tcp);
serverSocket.Bind(endPoint);
serverSocket.Listen(10);
2 - Receiving data
I have used a 255-byte array. So when I am receiving data that is more than 255 bytes, I need to call the receive method until I get the full data, right? Once I have the full data, I need to append all the bytes received so far to get the full message. Is that correct? Or is there a better approach?
3 - Sending data and specifying the data length
Since there is no way in TCP to find the length of the message to receive, I am planning to add the length to the message. This will be the first byte of the packet, so client systems know how much data is available to read.
Any other better approach?
4 - Closing the client
When the client is closed, it will send a message to the server indicating the close. The server will remove the client's details from its client list. The following is the code used at the client side to disconnect the socket (the messaging part is not shown).
client.Shutdown(SocketShutdown.Both);
client.Close();
Any suggestions or problems?
5 - Closing the server
The server sends a message to all clients indicating the shutdown. Each client will disconnect its socket when it receives this message. Clients will send the close message to the server and close. Once the server receives the close message from all the clients, it disconnects its socket and stops listening. Call Dispose on each client socket to release the resources. Is that the correct approach?
6 - Unknown client disconnections
Sometimes, a client may disconnect without informing the server. My plan to handle this is: when the server sends messages to all clients, check the socket status. If it is not connected, remove that client from the client list and close the socket for that client.
Any help would be great!
Since this is 'getting started' my answer will stick with a simple implementation rather than a highly scalable one. It's best to first feel comfortable with the simple approach before making things more complicated.
1 - Binding and listening
Your code seems fine to me, personally I use:
serverSocket.Bind(new IPEndPoint(IPAddress.Any, 4444));
Rather than going the DNS route, but I don't think there is a real problem either way.
1.5 - Accepting client connections
Just mentioning this for completeness' sake... I am assuming you are doing this otherwise you wouldn't get to step 2.
2 - Receiving data
I would make the buffer a little longer than 255 bytes, unless you can expect all your server messages to be at most 255 bytes. I think you'd want a buffer that is likely to be larger than the TCP packet size so you can avoid doing multiple reads to receive a single block of data.
I'd say picking 1500 bytes should be fine, or maybe even 2048 for a nice round number.
Alternately, maybe you can avoid using a byte[] to store data fragments, and instead wrap your server-side client socket in a NetworkStream, wrapped in a BinaryReader, so that you can read the components of your message directly from the socket without worrying about buffer sizes.
3 - Sending data and specifying data length
Your approach will work just fine, but it does obviously require that it is easy to calculate the length of the packet before you start sending it.
Alternately, your message format (the order of its components) could be designed in such a fashion that at any time the client will be able to determine whether there should be more data following (for example, code 0x01 means next will be an int and a string, code 0x02 means next will be 16 bytes, and so on). Combined with the NetworkStream approach on the client side, this may be a very effective approach.
To be on the safe side you may want to add validation of the components being received to make sure you only process sane values. For example, if you receive an indication for a string of length 1TB you may have had a packet corruption somewhere, and it may be safer to close the connection and force the client to re-connect and 'start over'. This approach gives you a very good catch-all behaviour in case of unexpected failures.
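As a sketch of the length-prefix-plus-validation idea (using a 4-byte prefix and BinaryReader/BinaryWriter purely for illustration; the size limit is an assumption):
// Sketch: length-prefixed framing with a sanity check on the announced length.
const int MaxMessageLength = 64 * 1024;   // illustrative upper bound
static void WriteMessage(BinaryWriter writer, byte[] payload)
{
    writer.Write(payload.Length);         // 4-byte length prefix
    writer.Write(payload);
}
static byte[] ReadMessage(BinaryReader reader)
{
    int length = reader.ReadInt32();
    if (length < 0 || length > MaxMessageLength)
        throw new InvalidDataException("Suspicious message length: " + length);
    return reader.ReadBytes(length);      // reads until 'length' bytes or end of stream
}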
4/5 - Closing the client and the server
Personally I would opt for just Close without further messages; when a connection is closed you will get an exception on any blocking read/write at the other end of the connection which you will have to cater for.
Since you have to cater for 'unknown disconnections' anyway to get a robust solution, making disconnecting any more complicated is generally pointless.
6 - Unknown disconnections
I would not trust even the socket status... it is possible for a connection to die somewhere along the path between client / server without either the client or the server noticing.
The only guaranteed way to tell a connection that has died unexpectedly is when you next try to send something along the connection. At that point you will always get an exception indicating failure if anything has gone wrong with the connection.
As a result, the only fool-proof way to detect all unexpected connections is to implement a 'ping' mechanism, where ideally the client and the server will periodically send a message to the other end that only results in a response message indicating that the 'ping' was received.
To optimise out needless pings, you may want to have a 'time-out' mechanism that only sends a ping when no other traffic has been received from the other end for a set amount of time (for example, if the last message from the server is more than x seconds old, the client sends a ping to make sure the connection has not died without notification).
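A small sketch of that idle-based ping (the frame format and the 30-second limit are assumptions for illustration):
// Sketch: only ping when nothing has been received for a while.
static readonly TimeSpan IdleLimit = TimeSpan.FromSeconds(30);
static readonly byte[] PingFrame = { 0x01 };   // placeholder ping message
static void PingIfIdle(NetworkStream stream, DateTime lastReceivedUtc)
{
    if (DateTime.UtcNow - lastReceivedUtc > IdleLimit)
    {
        stream.Write(PingFrame, 0, PingFrame.Length);  // expect a pong in reply
    }
}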
More advanced
If you want high scalability you will have to look into asynchronous methods for all the socket operations (Accept / Send / Receive). These are the 'Begin/End' variants, but they are a lot more complicated to use.
I recommend against trying this until you have the simple version up and working.
Also note that if you are not planning to scale further than a few dozen clients this is not actually going to be a problem regardless. Async techniques are really only necessary if you intend to scale into the thousands or hundreds of thousands of connected clients while not having your server die outright.
I probably have forgotten a whole bunch of other important suggestions, but this should be enough to get you a fairly robust and reliable implementation to start with.
1 - Binding and listening on a socket
Looks fine to me. Your code will bind the socket only to one IP address though. If you simply want to listen on any IP address/network interface, use IPAddress.Any:
serverSocket.Bind(new IPEndPoint(IPAddress.Any, 4444));
To be future proof, you may want to support IPv6. To listen on any IPv6 address, use IPAddress.IPv6Any in place of IPAddress.Any.
Note that you cannot listen on any IPv4 and any IPv6 address at the same time, except if you use a Dual-Stack Socket. This will require you to unset the IPV6_V6ONLY socket option:
serverSocket.SetSocketOption(SocketOptionLevel.IPv6, (SocketOptionName)27, 0);
To enable Teredo with your socket, you need to set the PROTECTION_LEVEL_UNRESTRICTED socket option:
serverSocket.SetSocketOption(SocketOptionLevel.IPv6, (SocketOptionName)23, 10);
2 - Receiving data
I'd recommend using a NetworkStream which wraps the socket in a Stream instead of reading the chunks manually.
Reading a fixed number of bytes is a bit awkward though:
using (var stream = new NetworkStream(clientSocket)) { // clientSocket: the connected Socket returned by Accept(), not the listening socket
var buffer = new byte[MaxMessageLength];
while (true) {
int type = stream.ReadByte();
if (type == BYE) break;
int length = stream.ReadByte();
int offset = 0;
do
offset += stream.Read(buffer, offset, length - offset);
while (offset < length);
ProcessMessage(type, buffer, 0, length);
}
}
Where NetworkStream really shines is that you can use it like any other Stream. If security is important, simply wrap the NetworkStream in a SslStream to authenticate the server and (optionally) the clients with X.509 certificates. Compression works the same way.
var sslStream = new SslStream(stream, false);
sslStream.AuthenticateAsServer(serverCertificate, false, SslProtocols.Tls, true);
// receive/send data SSL secured
3 - Sending data and specifying the data length
Your approach should work, although you probably don't want to go down the road of reinventing the wheel and designing a new protocol for this. Have a look at BEEP or maybe even something simple like protobuf.
Depending on your goals, it might be worth thinking about choosing an abstraction above sockets like WCF or some other RPC mechanism.
4/5/6 - Closing & Unknown disconnections
What jerryjvl said :-) The only reliable detection mechanisms are pings or keep-alives sent when the connection is idle.
While you have to deal with unknown disconnections in any case, I'd personally keep some protocol element in place to close a connection by mutual agreement instead of just closing it without warning.
Consider using asynchronous sockets. You can find more information on the subject in
Using an Asynchronous Server Socket
Using an Asynchronous Client Socket

NetworkStream.Write returns immediately - how can I tell when it has finished sending data?

Despite the documentation, NetworkStream.Write does not appear to wait until the data has been sent. Instead, it waits until the data has been copied to a buffer and then returns. That buffer is transmitted in the background.
This is the code I have at the moment. Whether I use ns.Write or ns.BeginWrite doesn't matter - both return immediately. The EndWrite also returns immediately (which makes sense since it is writing to the send buffer, not writing to the network).
bool done;
void SendData(TcpClient tcp, byte[] data)
{
NetworkStream ns = tcp.GetStream();
done = false;
ns.BeginWrite(bytWriteBuffer, 0, data.Length, myWriteCallBack, ns);
while (done == false) Thread.Sleep(10);
}
 
public void myWriteCallBack(IAsyncResult ar)
{
NetworkStream ns = (NetworkStream)ar.AsyncState;
ns.EndWrite(ar);
done = true;
}
How can I tell when the data has actually been sent to the client?
I want to wait 10 seconds (for example) for a response from the server after sending my data; otherwise I'll assume something went wrong. If it takes 15 seconds to send my data, then it will always time out, since I can only start counting from when NetworkStream.Write returns - which is before the data has been sent. I want to start counting the 10 seconds from when the data has left my network card.
The amount of data and the time to send it could vary - it could take 1 second to send it, it could take 10 seconds to send it, it could take a minute to send it. The server does send an response when it has received the data (it's a smtp server), but I don't want to wait forever if my data was malformed and the response will never come, which is why I need to know if I'm waiting for the data to be sent, or if I'm waiting for the server to respond.
I might want to show the status to the user - I'd like to show "sending data to server", and "waiting for response from server" - how could I do that?
I'm not a C# programmer, but the way you've asked this question is slightly misleading. The only way to know when your data has been "received", for any useful definition of "received", is to have a specific acknowledgment message in your protocol which indicates the data has been fully processed.
The data does not "leave" your network card, exactly. The best way to think of your program's relationship to the network is:
your program -> lots of confusing stuff -> the peer program
A list of things that might be in the "lots of confusing stuff":
the CLR
the operating system kernel
a virtualized network interface
a switch
a software firewall
a hardware firewall
a router performing network address translation
a router on the peer's end performing network address translation
So, if you are on a virtual machine, which is hosted under a different operating system, that has a software firewall which is controlling the virtual machine's network behavior - when has the data "really" left your network card? Even in the best case scenario, many of these components may drop a packet, which your network card will need to re-transmit. Has it "left" your network card when the first (unsuccessful) attempt has been made? Most networking APIs would say no, it hasn't been "sent" until the other end has sent a TCP acknowledgement.
That said, the documentation for NetworkStream.Write seems to indicate that it will not return until it has at least initiated the 'send' operation:
The Write method blocks until the requested number of bytes is sent or a SocketException is thrown.
Of course, "is sent" is somewhat vague for the reasons I gave above. There's also the possibility that the data will be "really" sent by your program and received by the peer program, but the peer will crash or otherwise not actually process the data. So you should do a Write followed by a Read of a message that will only be emitted by your peer when it has actually processed the message.
TCP is a "reliable" protocol, which means the data will be received at the other end if there are no socket errors. I have seen numerous efforts at second-guessing TCP with a higher level application confirmation, but IMHO this is usually a waste of time and bandwidth.
Typically the problem you describe is handled through normal client/server design, which in its simplest form goes like this...
The client sends a request to the server and does a blocking read on the socket waiting for some kind of response. If there is a problem with the TCP connection then that read will abort. The client should also use a timeout to detect any non-network related issue with the server. If the request fails or times out then the client can retry, report an error, etc.
Once the server has processed the request and sent the response it usually no longer cares what happens - even if the socket goes away during the transaction - because it is up to the client to initiate any further interaction. Personally, I find it very comforting to be the server. :-)
In general, I would recommend sending an acknowledgment from the client anyway. That way you can be 100% sure the data was received, and received correctly.
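For the blocking-read-with-a-timeout part of the pattern described above, a sketch (the 10-second timeout and the buffer size are purely illustrative):
// Sketch: send a request, then block on the response with a receive timeout.
socket.ReceiveTimeout = 10000;               // 10 seconds
socket.Send(requestBytes);
var response = new byte[4096];
try
{
    int read = socket.Receive(response);     // blocks until data, error, or timeout
    // process 'read' bytes of the response here
}
catch (SocketException ex) when (ex.SocketErrorCode == SocketError.TimedOut)
{
    // no response in time: retry, report an error, etc.
}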
If I had to guess, the NetworkStream considers the data to have been sent once it hands the buffer off to the Windows Socket. So, I'm not sure there's a way to accomplish what you want via TcpClient.
I can not think of a scenario where NetworkStream.Write wouldn't send the data to the server as soon as possible. Barring massive network congestion or disconnection, it should end up on the other end within a reasonable time. Is it possible that you have a protocol issue? For instance, with HTTP the request headers must end with a blank line, and the server will not send any response until one occurs -- does the protocol in use have a similar end-of-message characteristic?
Here's some cleaner code than your original version, removing the delegate, the field, and the Thread.Sleep. It performs exactly the same way functionally.
void SendData(TcpClient tcp, byte[] data) {
NetworkStream ns = tcp.GetStream();
// BUG?: should bytWriteBuffer == data?
IAsyncResult r = ns.BeginWrite(bytWriteBuffer, 0, data.Length, null, null);
r.AsyncWaitHandle.WaitOne();
ns.EndWrite(r);
}
Looks like the question was modified while I wrote the above. The .WaitOne() may help your timeout issue. It can be passed a timeout parameter. This is a lazy wait -- the thread will not be scheduled again until the result is finished, or the timeout expires.
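With a timeout, that same wait might look like this (the 10-second value is only an example):
// Sketch: bound the wait on the write with a timeout.
IAsyncResult r = ns.BeginWrite(data, 0, data.Length, null, null);
if (r.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(10)))
{
    ns.EndWrite(r);   // the write completed, i.e. was handed to the socket layer
}
else
{
    // timed out: treat the send as failed
}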
Trying to understand the intent of the .NET NetworkStream designers, they must have designed it this way on purpose: after Write, the data to send is no longer handled by .NET. Therefore, it is reasonable that Write returns immediately (and the data will be sent out from the NIC some time soon).
So in your application design you should follow this pattern rather than trying to make it work your way. For example, using a longer timeout before any data is received from the NetworkStream can compensate for the time consumed before your command leaves the NIC.
In any case, it is bad practice to hard-code a timeout value in source files. If the timeout value is configurable at runtime, everything should work fine.
How about using the Flush() method?
ns.Flush()
That should ensure the data is written before continuing.
Below .NET is Windows Sockets, which uses TCP.
TCP uses ACK packets to notify the sender that the data has been transferred successfully.
So the sending machine knows when the data has been transferred, but there is no way (that I am aware of) to get at that information in .NET.
Edit:
Just an idea, never tried:
Write() blocks only if the socket's buffer is full. So if we lower that buffer's size (SendBufferSize) to a very low value (8? 1? 0?) we may get what we want :)
Perhaps try setting
tcp.NoDelay = true
