Background:
C# .NET synchronous TCP server
a TcpClient object is assigned by blocking on a TcpListener with the AcceptTcpClient method
once there's a TcpClient object, I pass it to a thread that invokes the client's GetStream method to create a NetworkStream
this NetworkStream is read in a loop, each iteration doing a networkStream.Read(someBuffer, 0, 4096) (see the sketch below)
right now client and server are located on the same network, with no congestion to speak of
my server has plenty of memory to spare
if I load my server software onto another machine, the problem goes away
the kicker: traffic from a Linux box on the network gets through fine and on time
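In outline, the accept/read loop looks roughly like this (a sketch with illustrative names and port, not the actual code):

    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    // Rough sketch of the structure described above.
    TcpListener tcpListener = new TcpListener(IPAddress.Any, 5000);
    tcpListener.Start();

    while (true)
    {
        TcpClient client = tcpListener.AcceptTcpClient();     // blocks until a client connects
        new Thread(() =>
        {
            NetworkStream networkStream = client.GetStream();
            byte[] someBuffer = new byte[4096];
            int bytesRead;
            while ((bytesRead = networkStream.Read(someBuffer, 0, 4096)) > 0)
            {
                // ... handle 'bytesRead' bytes of the client's request ...
            }
            client.Close();
        }).Start();
    }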
My server has been functioning just fine for several months. However, over the past weekend, instead of receiving small groups of bytes in quick succession, the point where processing begins (the tcpListener.AcceptTcpClient() call) only fires every couple of minutes. So my server sits idle, then gets 30-50 client requests all bundled into one huge block of bytes. Needless to say this causes a huge delay and puts strain on my server. If the clump of client requests is big enough, it can take my server 30 minutes to catch up.
In logging built into my clients, I can see them do network writes, and flush between each one. So the clients are functioning correctly.
This reeks of some kind of system intervention. Is my TCP server (as described above) bad, or is something in Windows interfering with my traffic, and how can I tell?
Thanks guys.
You might want to install some packet capture software at each TCP endpoint. You'd be surprised. I'm suffering from a similar problem right now, almost completely identical actually.
When I put capture software in place I noted that traffic between endpoints was fast and on time as expected.
Related
I'm programming a Socket/Client/Server library for C#, since I do a lot of cross-platform programming, and I didn't find mono/dotnet/dotnet core efficient enough for high-performance socket handling.
Since Linux epoll inarguably won the performance and usability "fight", I decided to use an epoll-like interface as the common API, so I'm trying to emulate it in a Windows environment (Windows socket performance is less important to me than Linux performance, but the interface is). To achieve this I use the Winsock2 and Kernel32 APIs directly with marshaling, and I use IOCP.
Almost everything works fine, except for one thing: when I create a TCP server with Winsock and connect to it with more than 10000 connections (from the local machine or from a remote machine via LAN, it does not matter), all connections are accepted with no problem at all. When all the connections flood data from the client side to the server, there is no problem either; the server receives all packets. But when I disconnect all the clients at the same time, the server does not recognize all of the disconnection events (i.e. the 0-byte read/available), and usually 1500-8000 clients get stuck. The completion event does not get triggered, so I cannot detect the connection loss.
The server does not crash; it continues to accept new connections, and everything works as expected. Only the lost connections are not recognized.
I've read that - because overlapped IO needs a pre-allocated read buffer - IOCP locks these buffers for a read and releases the locks on completion, and that if too many events happen at the same time it cannot lock all the affected buffers because of an OS limit, which causes IOCP to hang for an indefinite time.
I've read that the solution to this buffer-lock problem is to use a zero-sized read with a null pointer as the buffer, so the pending read will not lock anything, and to use a real buffer only when there is real data to read.
I've implemented the above workaround and it works, except for the original problem: after disconnecting many thousands of clients at the same time, a few thousand still get stuck.
Of course I allow for the possibility that my code is wrong, so I made a basic server with dotnet's built-in SocketAsyncEventArgs class (as the official example describes), which basically does the same thing using IOCP, and the results are the same.
Everything works fine, except that when thousands of clients disconnect at the same time, a few thousand of the disconnection (read-on-disconnect) events are not recognized.
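For illustration, the zero-byte-receive pattern I'm describing looks roughly like this with SocketAsyncEventArgs (a sketch with placeholder names, not my actual code):

    using System.Net.Sockets;

    class ZeroByteReceiveSketch
    {
        // Post a zero-byte receive: no data buffer is pinned/locked while we wait.
        public void PostZeroByteReceive(Socket socket)
        {
            var args = new SocketAsyncEventArgs();
            args.SetBuffer(new byte[0], 0, 0);            // zero-length buffer
            args.Completed += OnReceiveReady;
            if (!socket.ReceiveAsync(args))               // false = completed synchronously
                OnReceiveReady(socket, args);
        }

        // Fires when data is waiting or the connection has been closed/reset.
        private void OnReceiveReady(object sender, SocketAsyncEventArgs args)
        {
            var socket = (Socket)sender;
            if (args.SocketError != SocketError.Success)
            {
                socket.Close();                           // hard error: treat as disconnected
                return;
            }

            var buffer = new byte[4096];                  // allocate a real buffer only now
            int read = socket.Receive(buffer, SocketFlags.None);
            if (read == 0)
            {
                socket.Close();                           // graceful close by the remote side
                return;
            }

            // ... process 'read' bytes, then wait for the next readiness event ...
            PostZeroByteReceive(socket);
        }
    }

The problem is exactly that, for a few thousand of the sockets, this completion never fires after a mass disconnect.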
I know I could do an IO operation and check the return value to see whether the socket can still perform IO, and disconnect it if not. The problem is that in some cases I have nothing to send to the socket, I only receive data; and doing such a check periodically would be almost the same as polling, causing high load with thousands of connections and wasted CPU work.
(I have tried numerous ways of closing the clients, from graceful disconnection to proper TCP socket closing, on both Windows and Linux clients; the results are always the same.)
My questions:
Is there any known solution to this problem?
Is there any efficient way to recognize a graceful TCP connection close by the remote side?
Can I somehow set a read timeout on an overlapped socket read?
Any help would be appreciated, thank You!
My goal is to build a C# TCP server that must transmit data that would otherwise be transmitted through UDP, which is unfortunately not an option for me.
The server must transmit a constant stream of real-time data, for example a sequence of numbers: 0 1 2 3 4 5 etc., and the client must display only the last one.
If everything goes well the client will receive every number, but if for some reason a packet gets lost I want to detect that and not send it again, but send the latest number instead.
So the question is: is it possible to detect TCP packet loss in C# and clear the buffer without flushing?
Thanks for your time! Fabio Iotti
UPDATE
Dropping all new packets except the lost one is also fine!
UPDATE
Also, detecting ACK packets would suffice.
"No."1
At the application level TCP can never have "lost" a packet. It will keep retrying (at the TCP/transport level) such that the application only sees a complete and accurate stream - hence TCP being a "reliable" protocol.
If too many packets are dropped the stream will eventually time out, but that's about the extent of detecting a failed connection. ("Heartbeats" and throughput monitoring can be used to detect a slow, failing, or high-latency connection, but only at a high level.)
¹ Some tools like Wireshark can work with TCP streams at a lower abstraction level, but this is generally not practical (or viable) for application code and is not exposed in the [.NET] TCP/Stream API.
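To illustrate the heartbeat idea: a minimal application-level heartbeat on top of the stream might look like the sketch below (the class, the ping byte, and the 5s/15s values are assumptions, not anything built into TCP or .NET):

    using System;
    using System.IO;
    using System.Net.Sockets;

    class Heartbeat
    {
        private readonly TcpClient client;
        private DateTime lastHeard = DateTime.UtcNow;

        public Heartbeat(TcpClient client) { this.client = client; }

        // Call from a timer every ~5 seconds.
        public void SendPing()
        {
            try { client.GetStream().Write(new byte[] { 0xFF }, 0, 1); }
            catch (IOException) { client.Close(); }       // write failed: connection is gone
        }

        // Call whenever any bytes (including the peer's pings) are read.
        public void MarkHeard() { lastHeard = DateTime.UtcNow; }

        // High-level "is it failing?" check; TCP itself will not report this promptly.
        public bool LooksDead() { return DateTime.UtcNow - lastHeard > TimeSpan.FromSeconds(15); }
    }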
I've begun learning TCP Networking with C#. I've followed the various tutorials and looked at the example code out there and have a working TCP server and client via async connections and writes/reads. I've got file transfers working as well.
Now I'd like to be able to track the progress of the transfer (0% -> 100%) on both the server and the client. When initiating the transfer from the server to client I send the expected file size, so the client knows how many bytes to expect, so I imagine I can easily do: curCount / totalCount on the client. But I'm a bit confused about how to do this for the server.
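For the client side I picture something roughly like this (a sketch; expectedFileSize, networkStream, and fileStream stand in for my actual variables):

    long totalBytes = expectedFileSize;        // size the server sent before the file data
    long received = 0;
    byte[] buffer = new byte[8192];
    int read;
    while (received < totalBytes &&
           (read = networkStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        fileStream.Write(buffer, 0, read);     // write the chunk to disk
        received += read;
        double percent = 100.0 * received / totalBytes;
        // ... update the client's progress display with 'percent' ...
    }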
How accurately can the server tell the state of the transfer on the client? Should I guess based on the server's own status (either via the networkStream.BeginWrite() callback, or via the chunk loading from disk and network writing)? Or should I have the client relay back to the server the client's completion?
I'd like to know this for when to close the connection, as well as be able to visually display progress. Should the server trust the client to close the connection (barring network errors/timeouts/etc)? Or can the server close the connection as soon as it's written to the stream?
There are two distinct percentages of completion here: the client's and the server's. If you consider the server to be done when it has sent the last byte, the server's percentage will always be at least as high as the client's. If you consider the server to be done when the client has processed the last byte the server's percentage will lag the one of the client. No matter what you do you will have differing values on both ends.
The values will differ by the amount of data currently queued in the various buffers between the server app and the client app. This buffer space is usually quite small. AFAIK the maximum TCP window size is by default 200ms of data transfer.
Probably, you don't need to worry about this issue at all because the progress values of both parties will be tightly tied to each other.
Should I guess based on the server's own status (either via the networkStream.BeginWrite() callback, or via the chunk loading from disk and network writing)?
This is an adequate solution.
Or should I have the client relay back to the server the client's completion?
This would be the 2nd case I described in my 1st paragraph. It is also acceptable, although it does not necessarily give a better result and it adds overhead. I cannot think of a situation right now in which I'd do it this way.
Should the server trust the client to close the connection (barring network errors/timeouts/etc)?
When one party of a TCP exchange is done sending, it should shut down the socket for sending (using Socket.Shutdown(SocketShutdown.Send)). This will cause the other party to read zero bytes and know that the transfer is done.
Before closing a socket, it should be shut down. If the Shutdown call completes without error it is guaranteed that the remote party has received all data and that the local party has received all data as well.
Or can the server close the connection as soon as it's written to the stream?
First, shut down, then close. Closing alone does not imply successful transfer.
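As a rough sketch of that order on the sending side (assuming a connected Socket named socket):

    // Shut down the sending side first, then close.
    socket.Shutdown(SocketShutdown.Send);      // tells the peer "no more data"; its read returns 0

    // Optionally drain anything the peer still sends before closing.
    byte[] drain = new byte[4096];
    while (socket.Receive(drain) > 0)
    {
        // discard (or process) whatever is left
    }

    socket.Close();                            // close only after the shutdown handshake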
I have a program that tells you if your computer is online or not. The way I do it is with the help of a Server that basically sends UDP packets to clients. Clients then respond back letting the server know that they are online. If a client does not respond for the next 5 seconds then I mark it as offline.
Anyway, I was testing this service, and from a different computer I sent thousands of UDP packets to the server. After sending so many packets the server was not working the way it was supposed to.
So now I know what happens if someone sends me a lot of packets. The problem is: how do I block those packets so that my server can still work?
Edit Possible Solution
I think I will implement the following solution; what do you guys think?
I will require 2 or more servers now. If a client finds that the first server is not responding, it will then talk to the second server. So the attacker would also have to know that there is a second server. Depending on how secure you want to be, you could even have 5 servers. I guess that if the attacker knows there are 5 servers then I've just wasted my time and money, right? lol
The general solution to this is to buy extra hardware that sits in front of the server and inspects the incoming packets.
What that extra hardware does depends on what solution you want to use. You could have it distribute the requests to many servers all running the same software (this would make the hardware you added a load balancer). You could also have the hardware detect an unusually large number of packets coming from a single address and start dropping packets from that address instead of forwarding them to the server (this would make the hardware you added a stateful firewall).
There are more options beyond those two, but all of the solutions revolve around reducing the load on the server (usually by shifting the load to another piece of hardware dedicated to taking it). You could potentially upgrade your software to be more resilient to packet floods, but unless your current software is written very poorly it won't buy you much more capacity.
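If you do want to harden the software itself, a crude per-source rate limiter in the UDP receive loop might look roughly like this (a sketch only; the port and threshold are arbitrary, and real flood protection still belongs in front of the server):

    using System;
    using System.Collections.Generic;
    using System.Net;
    using System.Net.Sockets;

    class FloodAwareServer
    {
        static void Main()
        {
            var counts = new Dictionary<IPAddress, int>();
            DateTime windowStart = DateTime.UtcNow;
            const int MaxPacketsPerSecond = 100;        // arbitrary threshold, tune to your traffic

            var udp = new UdpClient(9000);              // port is just an example
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                byte[] data = udp.Receive(ref remote);  // blocks until a packet arrives

                if (DateTime.UtcNow - windowStart > TimeSpan.FromSeconds(1))
                {
                    counts.Clear();                     // start a new one-second window
                    windowStart = DateTime.UtcNow;
                }

                int seen;
                counts.TryGetValue(remote.Address, out seen);
                counts[remote.Address] = ++seen;
                if (seen > MaxPacketsPerSecond)
                    continue;                           // drop traffic from a flooding source

                // ... normal handling of the client's "I'm online" reply goes here ...
            }
        }
    }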
I have a .net 2.0 application that uses the System.Net.Sockets Send() method to send data at regular intervals e.g. every minute. Essentially it is a socket server. Most of the time there will be no clients connected to receive this data but occasionally a user will run an app which will connect to monitor the data that is being sent at regular intervals. At each interval the most I will ever send will be about 1024 bytes with most messages being much smaller.
My question is what impact on system resources does calling Send() every minute with no one to receive it have? Will this eventually eat up all my memory? I have read that the windows sockets are based on Berkeley Sockets which creates a file descriptor. Is there some low level Standard Output (stdout) being performed and the data that does not get received simply goes into a black hole?
what impact on system resources does calling Send() every minute with no one to receive it have?
Should be none - it should throw an error because the socket isn't open.
Will this eventually eat up all my memory?
No, it should simply throw an error.
Is there some low level Standard Output (stdout) being performed and the data that does not get received simply goes into a black hole?
The data is indeed thrown away, and if you're paying attention you'll see the error.
You should first check to see if the socket is open before sending. If no one is connected, don't send. One extra if statement.
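As a rough sketch of that extra if (assuming client is the Socket for the accepted monitoring connection, or null while nobody is connected, and payload is the message bytes):

    if (client != null && client.Connected)
    {
        try
        {
            client.Send(payload);             // at most ~1024 bytes per interval
        }
        catch (SocketException)
        {
            client.Close();                   // the peer went away; stop sending to it
            client = null;
        }
    }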
-Adam