C# TCP server unreliability, dropping outdated buffered messages

My goal is to build a C# TCP server that must transmit data that would normally be sent over UDP; unfortunately, UDP is not an option for me.
The server must transmit a constant stream of realtime data, example: a sequence of numbers
0 1 2 3 4 5 etc... and the client must display only the last one.
If everything goes well the client will receive every number, but if for some reason a packet gets lost I want to detect that, skip resending it, and send the latest number instead.
So the question is: is it possible to detect TCP packet loss in C# and clear the buffer without flushing?
Thanks for your time! Fabio Iotti
UPDATE
Dropping all new packets except the lost one is also fine!
UPDATE
Also, detecting ACK packets would suffice.

"No."1
At the application level TCP can never have "lost" a packet. It will keep retrying (at the TCP/transport level) such that the application only sees a complete and accurate stream - hence TCP being a "reliable" protocol.
If too many packets are dropped the stream will eventually time out, but that's about the extent of detecting a failed connection. ("Heartbeats" and throughput monitoring can be used to detect a slow, failing, or high-latency connection, but only at a high level.)
¹ Some tools like Wireshark can play with TCP streams at a lower abstraction level, but this is generally not fitting (or viable) for application code and is not exposed in the [.NET] TCP/Stream API.
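For illustration, a minimal sketch of the heartbeat idea mentioned above (interval and timeout values, and all names, are arbitrary examples, not from the answer): the sender writes a ping byte periodically, and the receiver treats prolonged silence as a dead connection.

using System;
using System.Net.Sockets;
using System.Threading;

static class Heartbeat
{
    static readonly TimeSpan Interval = TimeSpan.FromSeconds(1);   // example value
    static readonly TimeSpan DeadAfter = TimeSpan.FromSeconds(5);  // example value

    // Sender side: periodically write a one-byte ping.
    // A real protocol would frame this so it can't be confused with data.
    public static void SendLoop(Socket socket)
    {
        while (true)
        {
            socket.Send(new byte[] { 0 });
            Thread.Sleep(Interval);
        }
    }

    // Receiver side: track when the last byte arrived and treat
    // anything older than DeadAfter as a failed connection.
    public static bool IsAlive(DateTime lastByteReceivedUtc)
    {
        return DateTime.UtcNow - lastByteReceivedUtc < DeadAfter;
    }
}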

Related

Deadlocked when both endpoints do Socket.SendAsync

So I have written this client/server socket application that uses the SocketAsyncEventArgs "method" for doing async sockets.
Using the same library I have used for many other applications, I now for the first time experience a situation that I never anticipated.
Our new client/server application, when started, begins to send lots of data in both directions.
When done in unit-tests using mock-objects (without delays) to mimic normal socket operations, it all works well.
But in real situations using real sockets, we get a sort of deadlock where both endpoints are stuck in a Socket.SendAsync() operation (yes, it returned true, so it was not handled synchronously).
My idea is that the receive buffers of both parties are full, and the TCP stack is not acknowledging any frames anymore (connected to 127.0.0.1).
So I made the receive buffer twice as large as the send buffer, but unfortunately it is not that simple due to the nature of our "protocol" and how we determine when to send or receive.
I now have to re-think the method that determines when to start sending and when to start receiving.
A complicating factor is that the purpose of this connection is to multiplex multiple bi-directional general-purpose communication channels over this one socket connection. That means there is no pre-determined sequence of communication; all channels may have their own protocols.
Of course, there is a TLS initiation, handshake and authentication, which all work well, but when the connection becomes operational and the channels start their own communications, the only sure thing is that received data has a size and channel number as a header.
After each operation, I check whether there is any data waiting in the receive buffer by checking Socket.Available.
Combining this with how much data was received since the last send operation, and how full the transmit buffer is getting, I decide whether to receive more, start sending, or do nothing and poll again in xx ms.
I now realize that this is wrong.
Am I trying to accomplish something that is simply not possible using only one socket connection?
Has anyone ever tried to accomplish something similar, or does anyone know a safe approach that does not introduce these odd lock-ups?
Thanks,
Theo.
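One pattern that commonly avoids this class of deadlock is to keep a receive pending at all times, independent of any sends, so both sides keep draining their receive buffers. A minimal sketch using SocketAsyncEventArgs (names and buffer size are illustrative):

using System.Net.Sockets;

static class ReceivePump
{
    public static void Start(Socket socket)
    {
        var args = new SocketAsyncEventArgs();
        args.SetBuffer(new byte[8192], 0, 8192);
        args.Completed += (s, e) => Process(socket, e);

        if (!socket.ReceiveAsync(args))    // false = completed synchronously
            Process(socket, args);
    }

    static void Process(Socket socket, SocketAsyncEventArgs e)
    {
        while (true)
        {
            if (e.SocketError != SocketError.Success || e.BytesTransferred == 0)
                return;                    // connection failed or closed

            // ...hand e.Buffer[0 .. e.BytesTransferred] to the channel demultiplexer...

            if (socket.ReceiveAsync(e))    // true = the Completed event will fire later
                return;
            // false = completed synchronously; loop and process inline
        }
    }
}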

Receive streaming of UDP packets sent at fixed, clocked rate

I have the requirement of receiving a stream of UDP packets at a fixed rate of 1000 packets per second. They are composed of 28 bytes of payload, where the first four bytes (Uint32) are the packet sequence number. The sender is an embedded device on the same local network, and addresses and ports are mutually known. I am not expected to receive every packet, but the host should be designed so that it doesn't add to the intrinsic UDP protocol limitations.
I have only limited previous experience with UDP and Sockets in general (just casual streaming from sensor app on android phone to PC, for a small demo project).
I am not sure what is the most sensible way to receive all the packets at such a fast rate. I imagine some sort of loop, or else some sort of self-retriggering receive operation. I have studied the difference between synchronous and asynchronous receives, and also the Task-based Asynchronous Pattern (TAP), but I don't feel very confident yet. It looks like a simple task, but I still find it hard to understand.
So the questions are:
Is the UdpClient class suitable for a scenario like this, or am I better off going with the Socket class? (Or another one, by the way.)
Should I use synchronous or asynchronous receiving? If I use synchronous, I am afraid of losing packets, and if I use asynchronous, how should I clock/throttle the receive operations so that I don't start them at too high a rate?
I thought about a tight loop testing UdpClient.Available, like below. Would it be a good design choice?
while (running)
{
    if (udpSocket.Available > 0)   // Available is a property, not a method
    {
        // do something (what?)
    }
}
Just read from your UdpClient as fast as you can (or as fast as is convenient for your application) using UdpClient.Receive(). The Receive() operation will block (wait) if there is no incoming packet, so calling Receive() three times in a row will always return you three UDP packets.
Generally, when receiving UDP packets you do not have to worry about the sending rate. Only the sender needs to worry about the sending rate. So the opposite advice for the sender is: Do not send a lot of UDP packets as fast as you can.
So my answers would be:
Yes, use UdpClient.
Use synchronous receiving. Call UdpClient.Receive() in a loop running in your receive thread, or in the main loop of your program, whatever you like.
You do not need to check for available data. UdpClient.Receive() will block until there is data available.
Do not be afraid of losing packets when receiving UDP: you will almost never lose UDP packets on the receiving host because of something you did wrong in receiving. UDP packets mostly get dropped by network components like routers somewhere on the network path, which cannot forward UDP packets as fast as they receive them.
Once the packets arrive at your machine, your OS does a fair amount of buffering (for example 128k on Linux), so even if your application is unresponsive for, say, 1 second, it will not lose any UDP packets.
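For illustration, a minimal sketch of that synchronous receive loop (the port number and the gap check are assumptions, not from the question):

using System;
using System.Net;
using System.Net.Sockets;

class Receiver
{
    static void Main()
    {
        // Port 5000 is an assumption; use the port the device actually sends to.
        var client = new UdpClient(5000);
        var remote = new IPEndPoint(IPAddress.Any, 0);
        uint lastSeq = 0;

        while (true)
        {
            // Receive() blocks until a datagram arrives; no polling needed.
            byte[] datagram = client.Receive(ref remote);
            if (datagram.Length < 28)
                continue;   // not one of our 28-byte packets

            // The first four bytes are the sequence number (UInt32).
            // BitConverter uses the host's byte order; swap if the device differs.
            uint seq = BitConverter.ToUInt32(datagram, 0);
            if (lastSeq != 0 && seq != lastSeq + 1)
                Console.WriteLine("Gap: expected {0}, got {1}", lastSeq + 1, seq);
            lastSeq = seq;
        }
    }
}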

Keeping track of a file transfer percentage

I've begun learning TCP Networking with C#. I've followed the various tutorials and looked at the example code out there and have a working TCP server and client via async connections and writes/reads. I've got file transfers working as well.
Now I'd like to be able to track the progress of the transfer (0% -> 100%) on both the server and the client. When initiating the transfer from the server to client I send the expected file size, so the client knows how many bytes to expect, so I imagine I can easily do: curCount / totalCount on the client. But I'm a bit confused about how to do this for the server.
How accurately can the server tell the transfer situation for the client? Should I guess based on the server's own status (either via the networkStream.BeginWrite() callback, or via the chunk loading from disk and network writing)? Or should I have the client relay back to the server the client's completion?
I'd like to know this both for deciding when to close the connection and for visually displaying progress. Should the server trust the client to close the connection (barring network errors/timeouts/etc)? Or can the server close the connection as soon as it's written to the stream?
There are two distinct percentages of completion here: the client's and the server's. If you consider the server to be done when it has sent the last byte, the server's percentage will always be at least as high as the client's. If you consider the server to be done when the client has processed the last byte the server's percentage will lag the one of the client. No matter what you do you will have differing values on both ends.
The values will differ by the amount of data currently queued in the various buffers between the server app and the client app. This buffer space is usually quite small. AFAIK the maximum TCP window size is by default 200ms of data transfer.
Probably, you don't need to worry about this issue at all because the progress values of both parties will be tightly tied to each other.
Should I guess based on the server's own status (either via the networkStream.BeginWrite() callback, or via the chunk loading from disk and network writing)?
This is an adequate solution.
Or should I have the client relay back to the server the client's completion?
This would be the 2nd case I described in my 1st paragraph. Also acceptable, although it gives no better result and adds more overhead. I cannot imagine a situation right now in which I'd do it this way.
Should the server trust the client to close the connection (barring network errors/timeouts/etc)?
When one party of a TCP exchange is done sending, it should shut down the socket for sending (using Socket.Shutdown(SocketShutdown.Send)). This will cause the other party to read zero bytes and know that the transfer is done.
Before closing a socket, it should be shut down. If the Shutdown call completes without error it is guaranteed that the remote party has received all data and that the local party has received all data as well.
Or can the server close the connection as soon as it's written to the stream?
First, shut down, then close. Closing alone does not imply successful transfer.
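For illustration, a minimal sketch of that shutdown-then-close sequence on the sending side (names are illustrative):

using System.Net.Sockets;

static class TransferCompletion
{
    public static void FinishSending(Socket socket)
    {
        // Signal end-of-stream; our receive side stays open.
        socket.Shutdown(SocketShutdown.Send);

        // Wait for the peer's own shutdown: a zero-byte read means
        // the other side has seen our end-of-stream and is done too.
        var buffer = new byte[4096];
        while (socket.Receive(buffer) > 0) { }

        socket.Close();
    }
}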

Async Sockets example which shows passing an object?

I need to create a server process which can push high-frequency data (1000 updates per second) to around 50 clients. I'm thinking the best way to do this is using async sockets with the SocketAsyncEventArgs type.
The client -> server connections will be long-running, from at least several days to indefinite. I plan to have a server process listening; the clients connect, and the server starts pushing the data to them.
Can someone point me to or show me an example of how to do this? I can't find any example showing a server process pushing an object to a client.
EDIT: This is over a gigabit LAN, using Windows Server with 16 cores and 24 GB RAM.
thanks
First, some more requirements from your side are needed. You have a server with lots of muscle, but it will fail miserably if you don't do what has to be done:
1. Can the client live without some of the data? I mean, does the stream of data need to reach the other side in proper order, without any drops?
2. How big is 'the data'? A few bytes, or more?
3. Fact: the scheduling interval on Windows is 10 msec.
4. Fact: no matter when you send, clients will receive it depending on lots of factors: network configuration, number of routers in between, client processor load, and so on. So you need some kind of timestamping here.
Depending on all this, you could design a priority queue with one thread servicing it and sending out UDP datagrams to each client. Also, since (4) is in effect, you can 'clump' some of your data together and send 10 updates per second of 100 data items each.
If you want to achieve something beyond that, a LAN with lots of quality network equipment will be required.
If you want to use .NET Sockets to create this server-client project, then this is a good outline of what's needed:
Since the server will be transferring data to several clients simultaneously, you'll need to use the asynchronous Socket.BeginXXX methods or the SocketAsyncEventArgs class.
You'll have clients connect to your server. The server will accept those connections and then add the newly connected client to an internal clients list.
You'll have a thread running within the server that periodically sends notifications to all sockets in the clients list. If any exceptions/errors occur while sending data to a socket, then that client is removed from the list.
You'll have to make sure that access to the clients list is synchronized since the server is a multithreaded application.
You don't need to worry about buffering your send data since the TCP stack takes care of that. If you do not want to buffer your data at all (i.e. have the socket send data immediately), then set Socket.NoDelay to true.
It doesn't seem like you need any data from your clients, but if you do, you'd have to make sure your server has a Socket.BeginReceive loop if using the Socket.BeginXXX pattern, or the Socket.ReceiveAsync method if using SocketAsyncEventArgs.
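For illustration, a minimal sketch of that accept-and-broadcast structure (all names are illustrative, and it uses blocking sends for brevity where the outline above recommends the asynchronous methods):

using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class PushServer
{
    readonly List<Socket> clients = new List<Socket>();
    readonly object sync = new object();

    public void Start(int port)
    {
        var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, port));
        listener.Listen(50);

        // Accept loop: add each newly connected client to the list.
        new Thread(() =>
        {
            while (true)
            {
                Socket client = listener.Accept();
                lock (sync) clients.Add(client);
            }
        }) { IsBackground = true }.Start();
    }

    // Called periodically by the notification thread.
    public void Broadcast(byte[] data)
    {
        lock (sync)
        {
            // Iterate over a copy so failed clients can be removed safely.
            foreach (Socket client in new List<Socket>(clients))
            {
                try { client.Send(data); }
                catch (SocketException)
                {
                    clients.Remove(client);
                    client.Close();
                }
            }
        }
    }
}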
Once you have the connection and transmission of data between server and client going, you then need to worry about serialization and deserialization of objects between client and server.
Serialization which occurs on the server is easy, since you can use the BinaryFormatter or other encoders to encode your object and dump the data onto the socket.
Deserialization on the other hand, which occurs on the client, can be pretty complex because an object can span multiple packets and you can have multiple objects in one packet. You essentially need a way to identify the beginning and end of an object within the stream, so that you can pluck out the object data and deserialize it.
One way to do this is to embed your data in a well-known protocol, like HTTP, and send it using that format. Unfortunately, this also means you'd have to write an HTTP parser at the client. Not an easy task.
Another way is to leverage an existing encoding scheme like Google's protocol buffers. This approach would require learning how to use the protocol buffers stack.
You can also embed the data in an XML structure and then have a stream-to-XML decoder on your client side. This is probably the easiest approach but the least efficient.
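For illustration, a sketch of a simpler fourth option not listed above: length-prefix framing, where every message on the stream is preceded by its byte count, so the receiver always knows where an object ends (all names are illustrative):

using System;
using System.IO;

static class Framing
{
    // Write [4-byte length][payload] to the stream.
    public static void WriteFrame(Stream stream, byte[] payload)
    {
        stream.Write(BitConverter.GetBytes(payload.Length), 0, 4);
        stream.Write(payload, 0, payload.Length);
    }

    // Read one complete frame, however many packets it spanned.
    public static byte[] ReadFrame(Stream stream)
    {
        byte[] header = ReadExactly(stream, 4);
        int length = BitConverter.ToInt32(header, 0);
        return ReadExactly(stream, length);
    }

    // Stream.Read may return fewer bytes than requested, so loop.
    static byte[] ReadExactly(Stream stream, int count)
    {
        var buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }
}

The receiver can then deserialize each returned byte array as one object, regardless of how TCP split or merged the underlying packets.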
As you can see, this is not an easy project, but you can get started with the Socket.BeginSend examples here and here, and the SocketAsyncEventArgs example here
Other Tips:
You can improve the reliability of your communication by having the client maintain two connections to the server for redundancy purposes. The reason being that TCP connections take a while to establish, so if one fails, you can still receive data from the other one while the client attempts to reconnect the failed connection.
You can look into using the TcpClient class for the client implementation, since it's mostly reading a stream from a network connection.
What about Rendezvous or 29West? It would save reinventing the wheel. I don't know about AMQP or ZeroMQ, but they might work fine too.

TCP or UDP help with a server/client in c#?

Can anyone help? I am trying to figure out what I need to do: I have been given the task of writing a server and a client in TCP (or UDP). Basically, multiple clients will connect to the server, and the server sends MESSAGES to the clients.
I have no problem creating the server and client, but I am unsure which way to go. Does .NET 3.5 support everything, or do I need to go on the hunt for some component?
I am looking for some good examples in C# for TCP or UDP. This is where I am not 100% sure: as far as I know there are UDP and TCP, one is connection-oriented and one is not. So which way do I go, and can C# support both? Advantages/disadvantages?
Say the server has to support multiple clients: do I only need to open one port, or do I need to open two?
Also, if a client crashes, it must not affect the SERVER; the server can either ignore it and close the connection if one is open, or time out the connection... if in fact a connection is needed at all, which goes back to TCP vs. UDP.
Any ideas where I should begin, which protocol to choose, and how many ports I am going to need to assign?
thanks
UDP cons:
Packet size restrictions mean you can only send small messages (less than about 1.5K bytes).
The lack of a stream makes it hard to secure UDP: it is hard to build an authentication scheme that works over a lossy exchange, and just as hard to protect the integrity and confidentiality of individual messages (no key state to rely on).
No delivery guarantee means your target must be prepared to deal with message loss. Now, it is easy to argue that if the target can handle a total loss of messages (which is possible), why bother sending them in the first place?
UDP pros:
No need to store a system endpoint on the server for each client (i.e. no socket). This is one major reason why MMO games connected to hundreds of thousands of clients use UDP.
Speed: the fact that each message is routed individually means that you cannot hit stream congestion the way TCP can.
Broadcast: UDP can broadcast to all listeners on a network segment.
You shouldn't even consider UDP if you're considering TCP too. If you're considering TCP, it means you are thinking in terms of a stream (exactly-once, in-order messages), and using UDP would put the burden of fragmentation, retry and acknowledgment, duplicate detection, and ordering on your app. In no time you'd be reinventing TCP in your application, and it took all the engineers in the world 20 years to get that right (or at least as right as it is in IPv4).
If you're unfamiliar with these topics I recommend you go with the flow and use WCF; at least it gives you the advantage of switching various transports and protocols in and out with relative ease. It will be much harder to change your code base from TCP to UDP and vice versa if you made the wrong choice using raw .NET socket components.
It sounds to me like you're not clear on the distinction between TCP and UDP.
TCP is connection-oriented, i.e. two peers will have a dedicated connection. Packet delivery and ordering are guaranteed. Typically a server will present a port, and multiple clients can connect to that port (think of an HTTP server and browsers).
UDP is connectionless. It doesn't guarantee packet delivery, nor ordering. You can implement broadcast and multicast mechanisms very easily. If you need some sort of reliability, you will have to implement this on top of UDP. Sometimes you may not care, and simply issue requests and retry on no response (SNMP does this). Because it's connectionless, you don't really worry about peers being up/down. You just have to retry if required.
So your choice of protocol is dictated by the above. E.g. does your client require a dedicated connection to the server? Are you transmitting the same data to multiple clients? Can you tolerate packet loss (e.g. real-time price updates etc.)? Perhaps it's feasible to use both TCP and UDP for different requirements within your app (e.g. TCP for registering orders, UDP for transmitting price updates/events).
I'd consider your requirements, and familiarise yourself with the limitations and features of TCP and UDP. That should make things a little clearer.
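For illustration, a minimal sketch of the UDP broadcast mentioned above (the port number and message are arbitrary examples):

using System.Net;
using System.Net.Sockets;
using System.Text;

class BroadcastExample
{
    static void Main()
    {
        var client = new UdpClient();
        client.EnableBroadcast = true;

        // Send one datagram to every listener on the local segment.
        byte[] message = Encoding.UTF8.GetBytes("price update");
        client.Send(message, message.Length, new IPEndPoint(IPAddress.Broadcast, 9876));
    }
}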
Is there a requirement to do this at such a low level? Why not use WCF? It fully supports messaging over TCP/IP, using binary data transfer, but it's at a much higher level of abstraction than raw sockets.
Everything you need is in .NET 3.5 (and probably below). Check out the documentation and examples for the UdpClient class on MSDN for insight into how to write your client/server. A quick Google search found some sample code for a server and client at www.java2s.com, among many other networking examples in C#. Don't be put off by the domain name.
