UDP sending data, something I don't quite understand - C#

Okay, from my knowledge UDP works like this:
You have data you want to send, and you tell the UDP client, "hey, send this data."
The UDP client says "sure, why not" and sends the data to the selected IP and port.
Whether it gets through, or arrives in the right order, is another story; it has sent the data, and you didn't ask for anything else.
Now from this perspective, it's pretty much impossible to send data and reassemble it.
For example, say I have a 1 MB image and I want to send it.
So I divide it into 60 KB chunks (or whatever fits the packets) and send them one by one, from first to last.
So in theory, if everything arrives, the image should be exactly the same.
But that theory breaks down, because nothing guarantees that one packet can't arrive faster or slower than another, so it may only work if you add some kind of wait timer and hope for the best that they arrive in the order they were sent.
Anyway, what I want to understand is: why does this work:
void Sending(object sender, NAudio.Wave.WaveInEventArgs e)
{
    if (connect == true && MuteMic.Checked == false)
    {
        udpSend.Send(e.Buffer, e.BytesRecorded, otherPartyIP.Address.ToString(), 1500);
    }
}
Receiving:
while (connect == true)
{
    byte[] byteData = udpReceive.Receive(ref remoteEP);
    waveProvider.AddSamples(byteData, 0, byteData.Length);
}
So this basically sends the audio buffer over UDP.
The receiving part just adds the received UDP data to a buffer and plays it.
Now, this works.
And I wonder... why?
How can this work? How come the data is sent in the right order and added so that it appears as a continuous audio stream?
Because if I did this with an image, I would probably get all the data.
But it would probably arrive in a random order, and I can only solve that by marking packets and such. And then there is simply no reason for it, and TCP takes over.
So if someone can please explain this, I just don't get it.
Here is a code example for sending an image, and, well, it works. But it seems to work better when the entire byte array isn't sent, meaning some part of the image is corrupted (not sure why, probably something to do with how the byte array is sized).
Send:
using (var udpcap = new UdpClient(0))
{
    udpcap.Client.SendBufferSize *= 16;
    bsize = ms.Length;
    var buff = new byte[7000];
    int c = 0;
    int size = 7000;
    for (int i = 0; i < ms.Length; i += size)
    {
        c = Math.Min(size, (int)ms.Length - i);
        Array.Copy(ms.GetBuffer(), i, buff, 0, c);
        udpcap.Send(buff, c, adress.Address.ToString(), 1700);
    }
}
Receive:
using (var udpcap = new UdpClient(1700))
{
    // Enlarge the receive buffer so bursts of datagrams aren't dropped
    // (the receiving side needs ReceiveBufferSize, not SendBufferSize).
    udpcap.Client.ReceiveBufferSize *= 16;
    var databyte = new byte[1619200];
    int i = 0;
    for (int q = 0; q < 11; ++q)
    {
        byte[] data = udpcap.Receive(ref adress);
        Array.Copy(data, 0, databyte, i, data.Length);
        i += data.Length;
    }
    var newImage = Image.FromStream(new MemoryStream(databyte));
    gmp.DrawImage(newImage, 0, 0);
}

You should be using TCP. You write: "it's pretty much impossible to send data and reassemble it. For example, I have a 1 MB image and I send it. So I divide it into 60 KB chunks (or whatever fits the packets) and send them one by one from first to last. ... But that theory breaks down, because nothing guarantees that one packet can't arrive faster or slower than another, so it may only work if you add some kind of wait timer and hope for the best that they arrive in the order they were sent." That's exactly what TCP does: ensure that all the pieces of a stream of data are received in the order they were sent, with no omissions, duplications, or modifications. If you really want to re-implement that yourself, you should be reading RFC 793 - it talks at length about how to build a reliable data stream atop an unreliable packet service.
But really, just use TCP.
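To illustrate (a minimal sketch, not production code: the address, port, and the 'ms' stream are assumptions carried over from the question, and error handling is omitted), the TCP version of the image transfer needs no chunk bookkeeping at all. The sender writes the whole image and closes; the receiver reads until end-of-stream:

using System.Drawing;
using System.IO;
using System.Net;
using System.Net.Sockets;

// Sender: write the image bytes and let TCP handle ordering and delivery.
using (var client = new TcpClient("127.0.0.1", 1700)) // hypothetical address/port
using (NetworkStream stream = client.GetStream())
{
    byte[] imageBytes = ms.ToArray(); // assumes 'ms' is the image MemoryStream from the question
    stream.Write(imageBytes, 0, imageBytes.Length);
}

// Receiver (a separate program): read until the sender closes, then decode.
var listener = new TcpListener(IPAddress.Any, 1700);
listener.Start();
using (TcpClient incoming = listener.AcceptTcpClient())
using (NetworkStream stream = incoming.GetStream())
{
    var received = new MemoryStream();
    stream.CopyTo(received); // copies until end-of-stream
    received.Position = 0;
    var newImage = Image.FromStream(received);
}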

You're missing a lot of helpful details from your question, but based on the level of understanding presented I'll attempt to answer at a similar level:
You're absolutely right: in general, the UDP protocol doesn't guarantee order of delivery, or even delivery at all. Your local host is going to send the packets (i.e. the parts of your message) in the order it receives them from the sending application, and from there it's up to network components to decide how they get delivered. Within local networks, however (within a handful of hops of the original sender), there aren't really a lot of directions for the packets to go. As such, they will likely just flow in line and never see a hiccup.
On the greater internet, however, there is likely a wide variety of routing choices available to each router between your sending host and the destination. Every router along the way can make a choice about which direction parts of your message take. If all paths were equal (they aren't) and every network segment between the two hosts were guaranteed reliable, you would likely see results similar to a local network (with added latency). Unfortunately, neither condition can be relied on: different paths on the internet perform differently depending on client and server, and no single path on the internet should ever be considered reliable (that's why it's a "net").
These are of course based on general observations from my own experience in network support and admin roles. Members of other StackExchange sites may be able to provide better feedback.
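If you must stay on UDP, the usual mitigation is exactly the "marking packets" idea from the question: prepend a sequence number to every datagram so the receiver can detect reordering and loss. A minimal sketch (the 4-byte header is an illustrative convention, not any standard):

using System;
using System.Collections.Generic;

// Sender side: prefix each chunk with a 4-byte sequence number.
static byte[] MakeChunk(int sequence, byte[] payload, int count)
{
    var chunk = new byte[4 + count];
    BitConverter.GetBytes(sequence).CopyTo(chunk, 0); // header: sequence number
    Array.Copy(payload, 0, chunk, 4, count);          // body: payload bytes
    return chunk;
}

// Receiver side: peel off the header and store the payload by sequence,
// so order is restored by key rather than by arrival time.
static void HandleChunk(byte[] datagram, IDictionary<int, byte[]> reassembly)
{
    int sequence = BitConverter.ToInt32(datagram, 0);
    var payload = new byte[datagram.Length - 4];
    Array.Copy(datagram, 4, payload, 0, payload.Length);
    reassembly[sequence] = payload;
}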

Related

UDP Client uses too much memory when sending multiple small parts of a huge array

I'm capturing the desktop with the Desktop Duplication API, encoding it in H.265, and sending it in parts via UDP (I can't use TCP because I need as low a latency as possible). I'm doing all of this in C# and Visual Studio, and the memory usage goes through the roof as soon as I uncomment the udpClient.Send() call.
With it commented out everything works great (including the frame capture, the splitting, etc.); as soon as I send, I reach the 2 GB mark in usage in less than 10 seconds, and it just keeps ramping up until it crashes. Also, none of the data is lost, as my server receives everything, so my packet management seems to be good.
int offset = 0;
int packetSize = 200;
for (int i = 0; i < clone.Length / packetSize; i++)
{
    int diff = clone.Length - offset;
    if (diff > packetSize)
        Array.ConstrainedCopy(clone, offset, subBuffer, 0, packetSize);
    else
        Array.ConstrainedCopy(clone, offset, subBuffer, 0, diff);
    udpClient.Send(subBuffer, packetSize, "255.255.255.255", 9009);
    offset += packetSize;
}
At this stage I'm just experimenting with the splitting and everything, and as stated, none of that part creates any issues (I know it could be made better). It's just the udpClient.Send() that makes everything go wrong. Any idea what could cause this, and how I could somehow force some memory management on the send?
After a lot of trial and error "in the dark", I managed to solve it by doing my own socket management.
It turns out you're allowed to make as many Send calls as you like, but the socket doesn't respond well to it: it keeps trying to send even when it's "clogged", and starts leaking memory, if that makes sense. I ended up sending asynchronously and waiting to know that the bytes were really sent before attempting another send. Here's the code for it; it works perfectly, with no memory leaks and zero packet loss. I started from the simple socket implementation I've taken from here, then added the custom management part.
Boolean isAvailable = true;

public void Send(byte[] data)
{
    // Drop the frame if the previous send hasn't completed yet;
    // this throttles the send rate to what the socket can handle.
    if (!isAvailable)
        return;

    isAvailable = false;
    // 'state' comes from the surrounding socket class.
    _socket.BeginSend(data, 0, data.Length, SocketFlags.None, (ar) =>
    {
        _socket.EndSend(ar);
        isAvailable = true;
    }, state);
}
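One caveat: as written, the snippet above silently drops frames while a send is in flight. If dropping is not acceptable, awaiting each send gives the same back-pressure without losing data. A minimal sketch, assuming .NET 4.5+ (the class and method names are illustrative):

using System.Net.Sockets;
using System.Threading.Tasks;

class ThrottledSender
{
    private readonly UdpClient _udp = new UdpClient();

    public async Task SendAsync(byte[] data, string host, int port)
    {
        // Awaiting each send applies back-pressure: the caller can't queue
        // datagrams faster than the socket hands them off to the OS.
        await _udp.SendAsync(data, data.Length, host, port);
    }
}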

C# best way to implement TCP Client Server Application

I want to extend my experience with the .NET framework and want to build a client/server application.
Actually, the client/server is a small point-of-sale system, but first I want to focus on the communication between server and client.
In the future, I want to make it a WPF application but for now, I simply started with a console application.
2 functionalities:
clients receive a dataset, and every 15/30 min an update with changed prices/new products
(so the code will be in an async method with a Thread.Sleep for 15/30 min);
when closing the client application, it sends a kind of report (for example, an XML file).
On the internet I found lots of examples, but I can't decide which is the best/safest/most performant way of working, so I need some advice on which techniques I should implement.
CLIENT/SERVER
I want one server application that handles at most 6 clients. I read that threads use a lot of memory, so maybe a better way would be tasks with async/await functionality.
Example with ASYNC/AWAIT
http://bsmadhu.wordpress.com/2012/09/29/simplify-asynchronous-programming-with-c-5-asyncawait/
Example with THREADS
mikeadev.net/2012/07/multi-threaded-tcp-server-in-csharp/
Example with SOCKETS
codereview.stackexchange.com/questions/5306/tcp-socket-server
This seems to be a great example of sockets; however, the revised code isn't working completely because not all the classes are included.
msdn.microsoft.com/en-us/library/fx6588te(v=vs.110).aspx
This MSDN example has a lot more, with a buffer size and a signal for the end of a message. I don't know if this is just an "old way" to do it, because in my previous examples they just send a string from the client to the server and that's it.
.NET FRAMEWORK REMOTING / WCF
I also found something about .NET Remoting and about WCF, but I don't know if I need to implement them, because I think the example with async/await isn't bad.
SERIALIZED OBJECTS / DATASET / XML
What is the best way to send data between them? An XML serializer, or just binary?
Example with Dataset -> XML
stackoverflow.com/questions/8384014/convert-dataset-to-xml
Example with Remoting
akadia.com/services/dotnet_dataset_remoting.html
If I use the async/await method, is it right to do something like this in the server application:
while (true)
{
    string input = Console.ReadLine();
    if (input == "products")
        SendProductToClients(port);
    if (input == "rapport")
    {
        string Example = Console.ReadLine();
    }
}
Here are several things anyone writing a client/server application should consider:
Application layer packets may span multiple TCP packets.
Multiple application layer packets may be contained within a single TCP packet.
Encryption.
Authentication.
Lost and unresponsive clients.
Data serialization format.
Thread based or asynchronous socket readers.
Retrieving packets properly requires a wrapper protocol around your data. The protocol can be very simple. For example, it may be as simple as an integer that specifies the payload length. The snippet I have provided below was taken directly from the open source client/server application framework project DotNetOpenServer available on GitHub. Note this code is used by both the client and the server:
private byte[] buffer = new byte[8192];
private int payloadLength;
private int payloadPosition;
private MemoryStream packet = new MemoryStream();
private PacketReadTypes readState;
private Stream stream;

private void ReadCallback(IAsyncResult ar)
{
    try
    {
        int available = stream.EndRead(ar);
        int position = 0;
        while (available > 0)
        {
            int lengthToRead;
            if (readState == PacketReadTypes.Header)
            {
                // Accumulate bytes until the fixed-length header is complete.
                lengthToRead = (int)packet.Position + available >= SessionLayerProtocol.HEADER_LENGTH ?
                    SessionLayerProtocol.HEADER_LENGTH - (int)packet.Position :
                    available;
                packet.Write(buffer, position, lengthToRead);
                position += lengthToRead;
                available -= lengthToRead;
                if (packet.Position >= SessionLayerProtocol.HEADER_LENGTH)
                    readState = PacketReadTypes.HeaderComplete;
            }
            if (readState == PacketReadTypes.HeaderComplete)
            {
                // Header is in: validate the protocol ID and read the payload length.
                packet.Seek(0, SeekOrigin.Begin);
                BinaryReader br = new BinaryReader(packet, Encoding.UTF8);
                ushort protocolId = br.ReadUInt16();
                if (protocolId != SessionLayerProtocol.PROTOCAL_IDENTIFIER)
                    throw new Exception(ErrorTypes.INVALID_PROTOCOL);
                payloadLength = br.ReadInt32();
                readState = PacketReadTypes.Payload;
            }
            if (readState == PacketReadTypes.Payload)
            {
                // Accumulate payload bytes until the full packet has arrived.
                lengthToRead = available >= payloadLength - payloadPosition ?
                    payloadLength - payloadPosition :
                    available;
                packet.Write(buffer, position, lengthToRead);
                position += lengthToRead;
                available -= lengthToRead;
                payloadPosition += lengthToRead;
                if (packet.Position >= SessionLayerProtocol.HEADER_LENGTH + payloadLength)
                {
                    // A complete packet: hand it to a worker thread, then reset for the next one.
                    if (Logger.LogPackets)
                        Log(Level.Debug, "RECV: " + ToHexString(packet.ToArray(), 0, (int)packet.Length));
                    MemoryStream handlerMS = new MemoryStream(packet.ToArray());
                    handlerMS.Seek(SessionLayerProtocol.HEADER_LENGTH, SeekOrigin.Begin);
                    BinaryReader br = new BinaryReader(handlerMS, Encoding.UTF8);
                    if (!ThreadPool.QueueUserWorkItem(OnPacketReceivedThreadPoolCallback, br))
                        throw new Exception(ErrorTypes.NO_MORE_THREADS_AVAILABLE);
                    Reset();
                }
            }
        }
        stream.BeginRead(buffer, 0, buffer.Length, new AsyncCallback(ReadCallback), null);
    }
    catch (ObjectDisposedException)
    {
        Close();
    }
    catch (Exception ex)
    {
        ConnectionLost(ex);
    }
}

private void Reset()
{
    readState = PacketReadTypes.Header;
    packet = new MemoryStream();
    payloadLength = 0;
    payloadPosition = 0;
}
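The write side of such a protocol is the mirror image: emit the header, then the payload. A minimal sketch, assuming the same header layout the read code above expects (a ushort protocol ID followed by an int payload length); this is not the DotNetOpenServer implementation itself:

using System.IO;
using System.Text;

// Frames a payload with the ushort protocol ID + int payload length header
// that ReadCallback above expects.
static byte[] FramePacket(ushort protocolId, byte[] payload)
{
    var ms = new MemoryStream();
    var bw = new BinaryWriter(ms, Encoding.UTF8);
    bw.Write(protocolId);     // 2 bytes: protocol identifier
    bw.Write(payload.Length); // 4 bytes: payload length
    bw.Write(payload);        // payload bytes
    bw.Flush();
    return ms.ToArray();
}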
If you're transmitting point-of-sale information, it should be encrypted. I suggest TLS, which is easily enabled through .NET. The code is very simple, and there are quite a few samples out there, so for brevity I'm not going to show the full setup here. If you are interested, you can find an example implementation in DotNetOpenServer.
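For orientation only, the general client-side SslStream pattern looks roughly like this (a sketch with an assumed host name and no certificate-validation callback; not the DotNetOpenServer code):

using System.Net.Security;
using System.Net.Sockets;

// Wrap the TCP connection in an SslStream before any application data flows.
var client = new TcpClient("server.example.com", 443); // hypothetical host/port
var ssl = new SslStream(client.GetStream());
ssl.AuthenticateAsClient("server.example.com");        // performs the TLS handshake
// From here on, read and write 'ssl' instead of the raw NetworkStream.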
All connections should be authenticated. There are many ways to accomplish this. I've used Windows Authentication (NTLM) as well as Basic. Although NTLM is powerful as well as automatic, it is limited to specific platforms. Basic authentication simply passes a username and password after the socket has been encrypted. Basic authentication can still, however, authenticate the username/password combination against the local server or domain controller, essentially impersonating NTLM. The latter method enables developers to easily create non-Windows client applications that run on iOS, Mac, Unix/Linux flavors, as well as Java platforms (although some Java implementations support NTLM). Your server implementation should never allow application data to be transferred until after the session has been authenticated.
There are only a few things we can count on: taxes, networks failing, and client applications hanging. It's just the nature of things. Your server should implement a method to clean up both lost and hung client sessions. I've accomplished this in many client/server frameworks through a keep-alive (a.k.a. heartbeat) protocol. On the server side, I implement a timer that is reset every time a client sends a packet, any packet. If the server doesn't receive a packet within the timeout, the session is closed. The keep-alive protocol is used to send packets when the other application-layer protocols are idle. Since your application only sends XML once every 15 minutes, sending a keep-alive packet once a minute would enable the server side to alert the administrator when a connection is lost before the 15-minute interval, possibly enabling the IT department to resolve a network issue in a more timely fashion.
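A minimal sketch of that server-side watchdog (the session class, the two-minute timeout, and the method names are all illustrative):

using System;
using System.Threading;

class Session
{
    private readonly Timer watchdog;

    public Session()
    {
        // Close the session if no packet arrives within the timeout.
        watchdog = new Timer(_ => Close(), null,
            TimeSpan.FromMinutes(2), Timeout.InfiniteTimeSpan);
    }

    // Called from the packet-read path; any packet resets the timer.
    public void OnPacketReceived()
    {
        watchdog.Change(TimeSpan.FromMinutes(2), Timeout.InfiniteTimeSpan);
    }

    private void Close()
    {
        // Tear down the socket and release session state here.
    }
}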
Next, the data format. In your case XML is great. XML enables you to change the payload however you want, whenever you want. If you really need speed, then binary will always trump the bloated nature of string-represented data.
Finally, as @NSFW already stated, threads versus asynchronous callbacks doesn't really matter in your case. I've written servers that scale to 10,000 connections based on threads as well as asynchronous callbacks. It's all really the same thing when it comes down to it. As @NSFW said, most of us are using asynchronous callbacks now, and the latest server implementation I've written follows that model as well.
Threads are not terribly expensive, considering the amount of RAM available on modern systems, so I don't think it's helpful to optimize for a low thread count. Especially if we're talking about a difference between 1 thread and 2-5 threads. (With hundreds or thousands of threads, the cost of a thread starts to matter.)
But you do want to optimize for minimal blocking of whatever threads you do have. So for example instead of using Thread.Sleep to do work on 15 minute intervals, just set a timer, let the thread return, and trust the system to invoke your code 15 minutes later. And instead of blocking operations for reading or writing information over the network, use non-blocking operations.
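One way to sketch that idea with the async/await style discussed below: an awaited delay loop handles the 15-minute cadence without parking a thread (SendProductUpdatesAsync is a hypothetical stand-in for your update code):

using System;
using System.Threading;
using System.Threading.Tasks;

// Sends updates every 15 minutes without blocking a thread:
// Task.Delay yields the thread back to the pool while waiting.
async Task SendUpdatesLoopAsync(CancellationToken token)
{
    while (!token.IsCancellationRequested)
    {
        await Task.Delay(TimeSpan.FromMinutes(15), token);
        await SendProductUpdatesAsync(); // hypothetical update method
    }
}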
The async/await pattern is the new hotness for asynchronous programming on .Net, and it is a big improvement over the Begin/End pattern that dates back to .Net 1.0. Code written with async/await is still using threads, it is just using features of C# and .Net to hide a lot of the complexity of threads from you - and for the most part, it hides the stuff that should be hidden, so that you can focus your attention on your application's features rather than the details of multi-threaded programming.
So my advice is to use the async/await approach for all of your IO (network and disk) and use timers for periodic chores like sending those updates you mentioned.
And about serialization...
One of the biggest advantages of XML over binary formats is that you can save your XML transmissions to disk and open them up using readily-available tools to confirm that the payload really contains the data that you thought would be in there. So I tend to avoid binary formats unless bandwidth is scarce - and even then, it's useful to develop most of the app using a text-friendly format like XML, and then switch to binary after the basic mechanism of sending and receiving data have been fleshed out.
So my vote is for XML.
And regarding your code example, well, there's no async/await in it...
But first, note that a typical simple TCP server has a small loop that listens for incoming connections and starts a thread to handle each new connection. The code for the connection thread then listens for incoming data, processes it, and sends an appropriate response. So the listen-for-new-connections code and the handle-a-single-connection code are completely separate.
So anyway, the connection-thread code might look similar to what you wrote, but instead of just calling ReadLine you'd do something like string line = await ReadLineAsync();. The await keyword marks approximately where your code lets one thread exit (after invoking ReadLineAsync) and then resumes on another thread (when the result is available). By convention, awaitable methods have names that end with Async, for example ReadLineAsync. Reading a line of text from the network is not a bad idea; StreamReader.ReadLineAsync over a NetworkStream gives you this, or you can build the equivalent yourself on top of the existing network API.
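To make that concrete, a minimal connection-handler sketch under those assumptions (line-oriented protocol, .NET 4.5+; the echo reply is a placeholder for real request processing):

using System.IO;
using System.Net.Sockets;
using System.Threading.Tasks;

// Handles one client connection, awaiting on IO instead of blocking a thread.
async Task HandleConnectionAsync(TcpClient client)
{
    using (NetworkStream stream = client.GetStream())
    using (var reader = new StreamReader(stream))
    using (var writer = new StreamWriter(stream) { AutoFlush = true })
    {
        string line;
        while ((line = await reader.ReadLineAsync()) != null)
        {
            // Process the request and reply; an echo stands in here.
            await writer.WriteLineAsync("ack: " + line);
        }
    }
}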
I hope this helps.

C# network stream fragmented data

My Context
I have a TCP networking program that sends large objects that have been serialized and encoded into base64 over a connection. I wrote a client library and a server library, and they both use NetworkStream's Begin/EndRead and Begin/EndWrite. Here's the (very much simplified) version of the code I'm using:
For the server:
var Server = new TcpServer(/* network stuffs */);
Server.Connect();
Server.OnClientConnect += new ClientConnectEventHandler(Server_OnClientConnect);

void Server_OnClientConnect()
{
    LargeObject obj = CalculateLotsOfBoringStuff();
    Server.Write(obj.SerializeAndEncodeBase64());
}
Then the client:
var Client = new TcpClient(/* more network stuffs */);
Client.Connect();
Client.OnMessageFromServer += new MessageEventHandler(Client_OnMessageFromServer);

void Client_OnMessageFromServer(MessageEventArgs mea)
{
    DoSomethingWithLargeObject(mea.Data.DecodeBase64AndDeserialize());
}
The client library has a callback method for NetworkStream.BeginRead which triggers the event OnMessageFromServer that passes the data as a string through MessageEventArgs.
My Problem
When receiving large amounts of data through BeginRead/EndRead, however, it appears to be fragmented over multiple messages. E.g., pretend this is a long message:
"This is a really long message except not because it's for explanatory purposes."
If that really were a long message, Client_OnMessageFromServer might be called... say three times with fragmented parts of the "long message":
"This is a really long messa"
"ge except not because it's for explanatory purpos"
"es."
Soooooooo.... takes deep breath
What would be the best way to have everything sent through one Begin/EndWrite to be received in one call to Client_OnMessageFromServer?
You can't. On TCP, how things arrive is not necessarily the same as how they were sent. It's the job of your code to know what constitutes a complete message and, if necessary, to buffer incoming data until you have a complete message (taking care not to discard the start of the next message in the process).
In text protocols, this usually means "spot the newline / NUL char". For binary, it usually means "read the length header in the preamble of the message".
TCP is a stream protocol and has no fixed message boundaries. This means you can receive part of a message, or the end of one message and the beginning of another.
There are two ways to solve this:
Alter your protocol to add end-of-message markers. You then keep receiving until you find the special marker. This can, however, leave you with a buffer containing the end of one message and the beginning of another, which is why I recommend the next way.
Alter your protocol to first send the length of the message. Then you know exactly how long the message is, and can count down while receiving so you won't read into the beginning of the next message.
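A minimal sketch of the length-prefix approach (a 4-byte little-endian prefix is assumed; the names are illustrative):

using System;
using System.IO;
using System.Net.Sockets;

// Sender: write a 4-byte length prefix, then the payload.
static void WriteMessage(NetworkStream stream, byte[] payload)
{
    byte[] lengthPrefix = BitConverter.GetBytes(payload.Length);
    stream.Write(lengthPrefix, 0, 4);
    stream.Write(payload, 0, payload.Length);
}

// Receiver: read exactly 4 bytes of length, then exactly that many payload bytes.
static byte[] ReadMessage(NetworkStream stream)
{
    int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
    return ReadExactly(stream, length);
}

// Loops until 'count' bytes arrive; a single Read may return fewer.
static byte[] ReadExactly(NetworkStream stream, int count)
{
    var buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new IOException("Connection closed mid-message.");
        offset += read;
    }
    return buffer;
}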

C# - TcpClient - Detecting end of stream?

I am trying to interface an ancient network camera to my computer and I am stuck at a very fundamental problem -- detecting the end of stream.
I am using TcpClient to communicate with the camera and I can actually see it transmitting the command data, no problems here.
List<int> incoming = new List<int>();
TcpClient clientSocket = new TcpClient();
clientSocket.Connect(txtHost.Text, Int32.Parse(txtPort.Text));
NetworkStream serverStream = clientSocket.GetStream();
serverStream.Flush();
byte[] command = System.Text.Encoding.ASCII.GetBytes("i640*480M");
serverStream.Write(command, 0, command.Length);
Reading back the response is where the problem begins though. I initially thought something simple like the following bit of code would have worked:
while (serverStream.DataAvailable)
{
    incoming.Add(serverStream.ReadByte());
}
But it didn't, so I tried another version, this time making use of ReadByte()'s return value. The documentation states:
"Reads a byte from the stream and advances the position within the stream by one byte, or returns -1 if at the end of the stream."
so I thought I could implement something along the lines of:
Boolean run = true;
int rec;
while (run)
{
    rec = serverStream.ReadByte();
    if (rec == -1)
    {
        run = false;
        //b = (byte)'X';
    }
    else
    {
        incoming.Add(rec);
    }
}
Nope, still doesn't work. I can actually see data coming in, and after a certain point (which is not always the same, otherwise I could have simply read that many bytes every time) I start getting 0 as the value for the rest of the elements, and it doesn't halt until I manually stop the execution.
So my question is, am I missing something fundamental here? How can I detect the end of the stream?
Many thanks,
H.
What you're missing is how you're thinking of a TCP data stream. It is an open connection, like an open phone line: someone on the other end may or may not be talking (DataAvailable), and just because they paused to take a breath (DataAvailable == false) it doesn't mean they're actually DONE with their current statement. A moment later they could start talking again (DataAvailable == true).
You need some kind of defined rules for the communication protocol ABOVE TCP, which is really just a transport layer. For instance, perhaps the camera sends a special character sequence when its current image transmission is complete, in which case you need to examine every character received, determine whether that sequence has arrived, and then act appropriately.
Well, you can't exactly see EOS on a network connection (unless the other party drops the connection). Usually the protocol itself contains something to signal that the message is complete (sometimes a newline, for example). So you read the stream into a buffer and extract complete messages by applying these strategies.
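A minimal sketch of that read-until-marker strategy (the marker byte is a placeholder; the camera's actual protocol defines the real terminator):

using System.Collections.Generic;
using System.Net.Sockets;

// Reads one message, stopping at the protocol's end-of-message marker.
// The marker value passed in is a placeholder for the camera's real terminator.
static List<byte> ReadUntilMarker(NetworkStream stream, byte marker)
{
    var message = new List<byte>();
    int b;
    while ((b = stream.ReadByte()) != -1) // -1 means the peer closed the connection
    {
        if (b == marker)
            break;                        // complete message received
        message.Add((byte)b);
    }
    return message;
}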

Handling dropped TCP packets in C#

I'm sending a large amount of data in one go between a client and server written in C#. It works fine when I run the client and server on my local machine, but when I put the server on a remote computer on the internet it seems to drop data.
I send 20000 strings using the socket.Send() method and receive them using a loop which does socket.Receive(). Each string is delimited by unique characters, which I use to count the number received (this is the protocol, if you like). The protocol is proven, in that even with fragmented messages each string is correctly counted. On my local machine I get all 20000; over the internet I get anything between 17000 and 20000. It seems to be worse the slower the remote computer's connection is. To add to the confusion, turning on Wireshark seems to reduce the number of dropped messages.
First of all, what is causing this? Is it a TCP/IP issue or something wrong with my code?
Secondly, how can I get round this? Receiving all of the 20000 strings is vital.
Socket receiving code:
private static readonly Encoding encoding = new ASCIIEncoding();
//...
while (socket.Connected)
{
    byte[] recvBuffer = new byte[1024];
    int bytesRead = 0;
    try
    {
        bytesRead = socket.Receive(recvBuffer);
    }
    catch (SocketException e)
    {
        if (!socket.Connected)
        {
            return;
        }
    }
    string input = encoding.GetString(recvBuffer, 0, bytesRead);
    CountStringsIn(input);
}
Socket sending code:
private static readonly Encoding encoding = new ASCIIEncoding();
//...
socket.Send(encoding.GetBytes(message)); // 'message' is the string being sent
If you're dropping packets, you'll see a delay in transmission, since the stack has to re-transmit the dropped packets. This could be very significant, although there's a TCP option called selective acknowledgement which, if supported by both sides, triggers a resend of only the packets that were dropped rather than every packet since the dropped one. There's no way to control that in your code. By default, you can always assume that every packet is delivered in order for TCP, and if for some reason it can't deliver every packet in order, the connection will drop, either by a timeout or by one end of the connection sending a RST packet.
What you're seeing is most likely the result of Nagle's algorithm. What it does is, instead of sending each bit of data as you post it, it sends the first chunk and then waits for an ack from the other side. While it's waiting, it aggregates all the other data that you want to send and combines it into one big packet, then sends that. Since the max size for TCP is 65k, it can combine quite a bit of data into one packet, although it's extremely unlikely that this will occur, particularly since winsock's default buffer size is about 10k or so (I forget the exact amount). Additionally, if the receiver's max window size is less than 65k, the sender will only send as much as the last advertised window size of the receiver. The window size also affects Nagle's algorithm in terms of how much data it can aggregate prior to sending, because it can't send more than the window size.
The reason you see this is that on the internet, unlike your local network, that first ack takes more time to return, so Nagle's algorithm aggregates more of your data into a single packet. Locally, the return is effectively instantaneous, so the stack can send your data as quickly as you can post it to the socket. You can disable Nagle's algorithm on the client side by using SetSockOpt (winsock) or Socket.SetSocketOption (.NET), but I highly recommend that you DO NOT disable Nagling on the socket unless you are 100% sure you know what you're doing. It's there for a very good reason.
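For reference only, since the advice above is to leave it enabled, this is where that switch lives in .NET:

// Disables Nagle's algorithm on this socket. Heed the caution above:
// only do this if you are certain that small-packet latency matters
// more than network efficiency for your workload.
socket.NoDelay = true;
// Equivalent: socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);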
Well, there's one thing wrong with your code to start with, if you're counting the number of Receive calls that complete: you appear to be assuming that you'll see as many Receive calls finish as you made Send calls.
TCP is a stream-based protocol: you shouldn't be worrying about individual packets or reads; you should be concerned with reading the data, expecting that sometimes you won't get a whole message in one read and sometimes you may get more than one message in a single read. (One read may not correspond to one packet, either.)
You should either prefix each message with its length before sending, or place a delimiter between messages.
It's definitely not TCP's fault. TCP guarantees in-order, exactly-once delivery.
Which strings are "missing"? I'd wager it's the last ones; try flushing from the sending end.
Moreover, your "protocol" here (I'm talking about the application-layer protocol you're inventing) is lacking: you should consider sending the number of objects and/or their lengths so the receiver knows when it's actually done receiving them.
How long is each of the strings? If they aren't exactly 1024 bytes, they'll be merged by the remote TCP/IP stack into one big stream, which you read in big blocks in your Receive call.
For example, using three Send calls to send "A", "B", and "C" will most likely arrive at your remote client as "ABC" (as either the remote stack or your own stack will buffer the bytes until they are read). If you need each string to arrive without being merged with other strings, look into adding a "protocol" with an identifier for the start and end of each string, or alternatively configure the socket to avoid buffering and combining packets.
