TCP Socket buffer and Data Overflow - C#

My application acts as a client for a bank's server, sending requests and receiving responses. After a response arrives, processing is done in a database, which takes some time. So if the server sends a response in 0.5 seconds but the DB operation takes 1 second, and only after that does my application call BeginReceive again, will the data accumulate somewhere, and if so, where is it stored? Is there some limit after which the data will overflow, and if that happens, will the socket be closed? I am declaring my socket buffer size as 1024. If anyone has an article that clears up these doubts, please share it.

Can you control what the server is sending to you? In most cases when the receiver operates on the received data, sending an application-level ACK upon finishing the work will allow the sender to know when to send the next request. This will ensure no data is lost (since TCP will make sure it does not get lost in the network).
If you can't change the way the server sends you data, you can consider running the receiver on a separate thread, where it saves every incoming request to a cache (either in RAM only or on disk). Then a worker thread (or multiple threads) reads requests from that cache and does the work you need. This way you have full control over the buffering of the data.
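A minimal sketch of that receiver-thread-plus-cache idea, assuming a `BlockingCollection<T>` as the in-RAM cache; the names (`BufferedReceiver`, `ProcessRequest`) and the 1024-byte buffer are illustrative, not from the poster's code:

```csharp
using System;
using System.Collections.Concurrent;
using System.Net.Sockets;

class BufferedReceiver
{
    // In-RAM cache between the receive thread and the worker thread.
    internal static readonly BlockingCollection<byte[]> Cache = new BlockingCollection<byte[]>();

    // Runs on its own thread: drains the socket as fast as data arrives.
    public static void ReceiveLoop(Socket socket)
    {
        var buffer = new byte[1024];
        while (true)
        {
            int read = socket.Receive(buffer);   // blocks until data is available
            if (read == 0) break;                // remote side closed the connection
            var chunk = new byte[read];
            Array.Copy(buffer, chunk, read);
            Cache.Add(chunk);                    // hand off; receiver never stalls on the DB
        }
        Cache.CompleteAdding();
    }

    // Runs on a second thread: the slow database work happens here.
    public static void WorkerLoop()
    {
        foreach (var chunk in Cache.GetConsumingEnumerable())
            ProcessRequest(chunk);
    }

    static void ProcessRequest(byte[] data)
    {
        Console.WriteLine("processing {0} bytes", data.Length);
    }
}
```

Because the receive loop only copies bytes into the queue, the socket's kernel receive buffer is emptied promptly even while the DB write is in progress.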

I think you will be reading the data in chunks yourself with the Socket, won't you? Then just don't read more than you can handle. The sender should do the same: if your input buffer fills up, the sender has to wait before sending more. But maybe there will be an error in that case. If this is really a big issue, just start by downloading the data into a file on your disk and process it after you have all of it. I don't think your HDD will be slower than your network :)


Where are network streams stored

I'm new to programming, and especially to the network world.
So far I have learned some things about TCP and sync/async programming and understand more or less how things go (I even wrote a simple client-server program).
But there are still a few issues I couldn't find an answer for:
1. If I (the client) write into a network stream (to the server) but the server doesn't read the stream until I run some command, what happens to those bits? Are they stored somewhere on the server side until they are read?
2. When I read the "stream" with the Stream.Read command (C#), where do I actually read from?
It would be nice to be directed to relevant reading material.
I think it will also help me understand async programming better.
Thanks
Some documentation related to the NetworkStream class:
https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.networkstream?view=netframework-4.7.2
Also, TCP keeps track of sent packets and verifies that they have been received, as opposed to UDP, where there is no validation of whether sent packets arrived.
https://www.howtogeek.com/190014/htg-explains-what-is-the-difference-between-tcp-and-udp/
So it depends on whether it's TCP or UDP in this case.
Also, the difference between, for example, a FileStream and a NetworkStream in your case is that a FileStream uses a file (typically on a hard drive) as a way to store information, which another application can read at any time, while a NetworkStream requires both applications to have an active connection at the same time, because the information is sent and received over the network. It doesn't have any persistent form of storage by default.
EDIT: Stream.Read is a blocking call: it waits until data is available on the already-established connection. Separately, the server needs a listening socket (e.g. a TcpListener) to accept the connection in the first place before it can read anything from a stream.
In the case of TCP, a server and client first need to complete a 3-way handshake before they can exchange information. https://www.inetdaemon.com/tutorials/internet/tcp/3-way_handshake.shtml
If i (client) write into a network stream (to server) but the server doesn't read the stream untill i run some command. What happens to those bits?
The data you write into the stream is passed to your OS kernel, where it is queued and eventually sent over the network. The output queue may be full; in that case the sending function may either wait, refuse to send, or wait asynchronously - it depends.
After the data is sent by the client and received by the server, it is collected, reassembled and checked in the kernel of the server's OS. If everything is fine, the data waits in the input queue until you read it.
Are they being stored somewhere on the server side until they will be read?
So yes, your understanding is correct.
When i read the "stream" with stream.read command (c#) Where do i actually read from?
I don't know the C# specifics, but let me put it this way: at this level of abstraction, data is always read from the kernel - from a piece of memory on the receiving computer. The function can wait for data to appear (optionally allowing other tasks to run in the meantime). I wouldn't expect anything lower-level here. I also wouldn't be surprised if your stream reader buffers data or gives it out in batches (e.g. a ReadLine method), but again - I don't know the C# specifics.
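The kernel buffering described above can be observed directly with a small loopback experiment; this sketch (all names and the one-second delay are illustrative) writes from the client before the server ever reads, and the bytes are still there when the server finally calls Read, because they waited in the OS receive buffer:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

class KernelBufferDemo
{
    public static string Run()
    {
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        using (var client = new TcpClient("127.0.0.1", port))
        {
            // Client writes immediately; nobody is reading yet.
            client.GetStream().Write(Encoding.ASCII.GetBytes("hello"), 0, 5);

            using (TcpClient serverSide = listener.AcceptTcpClient())
            {
                Thread.Sleep(1000);   // server "ignores" the stream for a second

                var buffer = new byte[16];
                int read = serverSide.GetStream().Read(buffer, 0, buffer.Length);
                listener.Stop();
                return Encoding.ASCII.GetString(buffer, 0, read);
            }
        }
    }

    static void Main()
    {
        Console.WriteLine(KernelBufferDemo.Run());   // prints "hello"
    }
}
```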

C# Server - TCP/IP Socket Efficiency

Good day all!
I am working on an open source server for a game that is closed source - the game operates using TCP/IP sockets (instead of UDP, doh...). Being a connection-based protocol, I am constrained to work with this.
My current program structure (dumbed down):
Core Thread
Receive new connection and create a new client object.
Client Object
IOloop (runs on its own thread)
Get data from socket, process packets. (one packet at a time)
Send data buffered from other threads (one packet at a time)
The client sends data immediately (no delay), since it runs on its own thread.
I am noticing a huge flaw with the program: mainly, it sends data very slowly, because I send data packet by packet, synchronously (Socket.Send(byte[] buffer)).
I would like to send data almost immediately, without delay - asynchronously.
I have tried creating a new thread every time I wanted to send a packet (so each packet sends on its own managed thread), but this was a huge mess.
My current system uses synchronous sending with the Nagle algorithm disabled - but this has the flaw of bottlenecking: send one packet, the send operation blocks until TCP confirms, then send the next... I can easily issue 10 packets every 100ms, and if the packets take 400ms to send, this backs up and breaks. Of course, on localhost I don't get this issue.
So, my question: what would be the best way to send multiple small packets of data? I am thinking of merging the data to be sent at the end of every IO thread loop into one large byte buffer, to reduce the back-and-forth delay - but the obvious problem is that this undermines the Nagle-algorithm avoidance which I had hoped would stop the delays.
How does a synchronous send work? Does it block until the data is confirmed correctly received by the recipient as I believe? Is there a way I can do this without waiting for confirmation? I do understand the packets must all go in order and correctly (as per the protocol specification).
I have done some research and I'm now using the current system:
Async send/receive operations with no threading, using AsyncCallbacks.
So I have the server start a read operation; the callback fires when data arrives, the received packet is processed, and then a new read operation begins...
This method drastically reduces system overheads - thread pools, memory etc.
I have used some methods from the following and customised to my liking: https://msdn.microsoft.com/en-us/library/bew39x2a(v=vs.110).aspx
Very efficient; I highly recommend this approach. For any TCP/IP network application low lag is very important, and async callbacks are a good way to achieve it.
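A minimal sketch of that callback-driven receive loop, loosely modeled on the MSDN asynchronous-socket example linked above; the class name, buffer size and `ProcessPacket` are illustrative:

```csharp
using System;
using System.Net.Sockets;

class AsyncReceiver
{
    private readonly Socket _socket;
    private readonly byte[] _buffer = new byte[8192];

    public int BytesReceived;   // running total, for visibility

    public AsyncReceiver(Socket socket)
    {
        _socket = socket;
        BeginRead();
    }

    private void BeginRead()
    {
        // Re-arm a single pending receive; no dedicated thread blocks on the socket.
        _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, OnReceive, null);
    }

    private void OnReceive(IAsyncResult ar)
    {
        int read = _socket.EndReceive(ar);
        if (read == 0) { _socket.Close(); return; }   // peer closed the connection

        BytesReceived += read;
        ProcessPacket(_buffer, read);                 // parse/handle one chunk
        BeginRead();                                  // start the next read
    }

    private void ProcessPacket(byte[] data, int count)
    {
        Console.WriteLine("got {0} bytes", count);
    }
}
```

The key point is that only one receive is outstanding at a time and the callback runs on a pool thread, so no per-connection thread sits blocked in Receive.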

Keeping track of a file transfer percentage

I've begun learning TCP Networking with C#. I've followed the various tutorials and looked at the example code out there and have a working TCP server and client via async connections and writes/reads. I've got file transfers working as well.
Now I'd like to be able to track the progress of the transfer (0% -> 100%) on both the server and the client. When initiating the transfer from the server to client I send the expected file size, so the client knows how many bytes to expect, so I imagine I can easily do: curCount / totalCount on the client. But I'm a bit confused about how to do this for the server.
How accurate can the server tell the transfer situation for the client? Should I guess based on the server's own status (either via the networkStream.BeginWrite() callback, or via the chunk loading from disk and networking writing)? Or should I have the client relay back to the server the client's completion?
I'd like to know this for when to close the connection, as well as be able to visually display progress. Should the server trust the client to close the connection (barring network errors/timeouts/etc)? Or can the server close the connection as soon as it's written to the stream?
There are two distinct percentages of completion here: the client's and the server's. If you consider the server to be done when it has sent the last byte, the server's percentage will always be at least as high as the client's. If you consider the server to be done when the client has processed the last byte the server's percentage will lag the one of the client. No matter what you do you will have differing values on both ends.
The values will differ by the amount of data currently queued in the various buffers between the server app and the client app. This buffer space is usually quite small. AFAIK the maximum TCP window size is by default 200ms of data transfer.
Probably, you don't need to worry about this issue at all because the progress values of both parties will be tightly tied to each other.
Should I guess based on the server's own status (either via the networkStream.BeginWrite() callback, or via the chunk loading from disk and networking writing)?
This is an adequate solution.
Or should I have the client relay back to the server the client's completion?
This would be the second case I described in my first paragraph. Also acceptable, although it doesn't necessarily give a better result and adds more overhead. I can't think of a situation right now in which I'd do it this way.
Should the server trust the client to close the connection (barring network errors/timeouts/etc)?
When one party of a TCP exchange is done sending it should shutdown the socket for sending (using Socket.Shutdown(Send)). This will cause the other party to read zero bytes and know that the transfer is done.
Before closing a socket, it should be shut down. If the shutdown sequence completes on both sides without error, that is a strong indication that the remote party has received all data and that the local party has received all data as well.
Or can the server close the connection as soon as it's written to the stream?
First, shut down, then close. Closing alone does not imply successful transfer.
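Putting both points together, here is a rough sketch (class name, chunk size and progress reporting are illustrative) of a server-side send loop that reports progress from its own write position, then performs the shutdown-before-close sequence described above:

```csharp
using System;
using System.IO;
using System.Net.Sockets;

class FileSender
{
    public static void Send(Socket socket, string path)
    {
        using (var file = File.OpenRead(path))
        {
            long total = file.Length, sent = 0;
            var chunk = new byte[64 * 1024];
            int read;
            while ((read = file.Read(chunk, 0, chunk.Length)) > 0)
            {
                socket.Send(chunk, 0, read, SocketFlags.None);
                sent += read;
                // Server-side progress: how much has been handed to TCP so far.
                Console.WriteLine("server progress: {0}%", 100 * sent / total);
            }
        }

        socket.Shutdown(SocketShutdown.Send);   // signals "no more data" to the client
        var ack = new byte[1];
        socket.Receive(ack);                    // returns 0 once the client closes too
        socket.Close();                         // only now is it safe to close
    }
}
```

Note the server's percentage tracks bytes handed to the TCP stack, so it will run slightly ahead of the client's percentage by whatever is queued in the buffers in between, as the answer explains.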

Thread Management in C# Socket Programming

I have a TCP socket application where the client side sends huge string messages to the server, all at the same time. The server writes these messages into an Access DB. If there are many clients, the server side cannot handle each client properly, and sometimes the server shuts itself down.
Is there any way to tell a client's thread to wait in a queue before sending its message if another client is currently being served? That way the server doesn't need to handle, for example, 30 clients' demands at the same time.
For example:
Client 1 sends a message => server processes client 1's request.
Client 2 waits for client 1's request to complete, then sends its message => server processes client 2's request.
Client 3 waits for client 2's request to complete, and so on.
My problem appears when I use the Access DB. Opening the Access connection, saving the data into tables and closing the DB takes time, and the server goes haywire :) If I don't use the Access DB I can receive huge messages with no problem.
Yes, you can do that; however, it's not the most efficient way, since your scheme is effectively single-threaded.
What you want to do is create a thread pool, accept messages from multiple clients, and process them on separate threads.
If that's too complicated, you can have a producer-consumer queue inside your server: all incoming messages are stored in the queue while the server processes them on a first-come, first-served basis.
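A minimal sketch of that producer-consumer queue, assuming a `BlockingCollection<T>`; `SaveToAccessDb` is a placeholder for the poster's own database code:

```csharp
using System.Collections.Concurrent;
using System.Threading;

class MessageQueueServer
{
    internal static readonly BlockingCollection<string> Queue = new BlockingCollection<string>();

    // Called from each client's receive path - returns immediately,
    // so clients are never blocked by the slow DB.
    public static void Enqueue(string message)
    {
        Queue.Add(message);
    }

    // One worker drains the queue, serialising the Access DB writes
    // (first come, first served).
    public static void StartWorker()
    {
        var worker = new Thread(() =>
        {
            foreach (var message in Queue.GetConsumingEnumerable())
                SaveToAccessDb(message);
        });
        worker.IsBackground = true;
        worker.Start();
    }

    static void SaveToAccessDb(string message)
    {
        // placeholder: open connection, insert, close
    }
}
```

Because only the single worker touches Access, the DB sees one open/insert/close cycle at a time regardless of how many clients are sending.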
I think you should consider using a web server for your application, and replacing your protocol with HTTP. Instead of sending huge strings on a TCP stream, just POST the string to the server using your favorite HttpClient class.
By moving to HTTP you more or less solve all your performance issues. The web server already knows how to handle multiple long requests so you don't need to worry about that. Since you're sending big strings, the HTTP overhead is not going to affect your performance.
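If you go the HTTP route, the client side can be as simple as this sketch (the URL is a placeholder, and `HttpClient` assumes .NET 4.5 or later):

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class HttpSender
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // POST the big string; the web server handles queueing and concurrency.
            var content = new StringContent("a huge string message", Encoding.UTF8, "text/plain");
            HttpResponseMessage response = await client.PostAsync("http://example.com/messages", content);
            Console.WriteLine(response.StatusCode);
        }
    }
}
```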

C# Socket Server Send Question

I have a .net 2.0 application that uses the System.Net.Sockets Send() method to send data at regular intervals e.g. every minute. Essentially it is a socket server. Most of the time there will be no clients connected to receive this data but occasionally a user will run an app which will connect to monitor the data that is being sent at regular intervals. At each interval the most I will ever send will be about 1024 bytes with most messages being much smaller.
My question is what impact on system resources does calling Send() every minute with no one to receive it have? Will this eventually eat up all my memory? I have read that the windows sockets are based on Berkeley Sockets which creates a file descriptor. Is there some low level Standard Output (stdout) being performed and the data that does not get received simply goes into a black hole?
what impact on system resources does calling Send() every minute with no one to receive it have?
Should be none - it should throw an error because the socket isn't open.
Will this eventually eat up all my memory?
No, it should simply throw an error.
Is there some low level Standard Output (stdout) being performed and the data that does not get received simply goes into a black hole?
The data is indeed thrown away, and if you're paying attention you'll see the error.
You should first check to see if the socket is open before sending. If no one is connected, don't send. One extra if statement.
-Adam
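A sketch of that "check before you send" advice; it also wraps Send in a try/catch, since `Socket.Connected` only reflects the state as of the last completed operation and the peer can disappear between the check and the send (the class and method names are illustrative):

```csharp
using System.Net.Sockets;

static class SafeSend
{
    public static bool TrySend(Socket client, byte[] data)
    {
        if (client == null || !client.Connected)
            return false;                  // no one is listening; skip this interval

        try
        {
            client.Send(data);
            return true;
        }
        catch (SocketException)
        {
            return false;                  // peer went away after the Connected check
        }
    }
}
```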
