C# How to split UDP packets?

I'm using the UdpClient class to send packets.
It seems that there's a per-packet size limit, since big packets never reach their destination. I tried to lower the packet size, which allows the packets to reach their destination. I read somewhere that the "standard" packet size limit is 512 bytes.
But I still need to send objects that are way larger than 512 bytes.
So my question is: is there a built-in way in .NET to split up a byte array into smaller packets? Obviously, I need to reassemble the split packets afterwards, too.
I saw the SendFile method in the Socket class, which I guess should be able to automatically split up big files. But the method doesn't allow byte array input (only file name). So it would only work for sending data that is stored on the hard drive, not for in-memory data.

The Send function in the Socket class takes a byte array as a parameter.
http://msdn.microsoft.com/en-us/library/w93yy28a.aspx
You can try this instead.
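There is no built-in splitter for in-memory data, so one way to apply that suggestion is to chunk the array yourself and send each chunk as its own datagram with Socket.Send. Below is a minimal sketch of that idea; the 512-byte chunk size, class name, and endpoint handling are assumptions for illustration, and the receiver still has to reassemble the chunks.

    // A minimal sketch (not from the original answer): splitting a byte array into
    // fixed-size chunks and sending each chunk as its own UDP datagram via Socket.Send.
    // The 512-byte chunk size is an illustrative assumption.
    using System;
    using System.Net;
    using System.Net.Sockets;

    class UdpChunkSender
    {
        const int ChunkSize = 512;   // assumed safe per-datagram payload size

        static void SendInChunks(byte[] data, IPEndPoint remote)
        {
            using (var socket = new Socket(AddressFamily.InterNetwork,
                                           SocketType.Dgram, ProtocolType.Udp))
            {
                socket.Connect(remote);   // lets us call Send instead of SendTo

                for (int offset = 0; offset < data.Length; offset += ChunkSize)
                {
                    int count = Math.Min(ChunkSize, data.Length - offset);
                    // Each Send call on a connected UDP socket emits one datagram.
                    socket.Send(data, offset, count, SocketFlags.None);
                }
            }
        }
    }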

Sending a large block of data by UDP seems a little odd, because with UDP the datagrams are not guaranteed to arrive at the other side. And even if they all do arrive they're not guaranteed to be in the original order. Are you sure you want to use UDP?
Ciaran Keating was right. TCP was a better choice for my needs.

Related

Read from serial port with timeout between bytes in .NET

I would like to read some data from a serial port. However, the packets of data are not well defined - there is no fixed length or end character. What I do to solve this is wait until the serial port stops sending data for a short time, say 50 ms, and then treat that one transmission as complete. In Python, I can use the inter_byte_timeout feature of pyserial for this. I am confident the device will send all bytes in one go, so this works.
How do I do it in C#? I have been using System.IO.Ports.SerialPort, but I am open to other solutions.
What doesn't work is the normal ReadTimeout property. That only waits until the first byte. Read will then return an arbitrary amount of data. I also can't use ReceivedBytesThreshold, since I don't know how many bytes it will be in total.
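For what it's worth, a rough sketch of that "silence means end of transmission" approach with System.IO.Ports.SerialPort might look like the following; the 50 ms threshold, buffer size, and names are illustrative assumptions, not a tested recipe.

    // A rough sketch of the inter-byte-timeout idea described above, assuming
    // an already-open System.IO.Ports.SerialPort.
    using System;
    using System.Collections.Generic;
    using System.IO.Ports;

    static class InterByteTimeoutReader
    {
        public static byte[] ReadTransmission(SerialPort port, int interByteTimeoutMs = 50)
        {
            var received = new List<byte>();
            var chunk = new byte[256];

            // Wait as long as needed for the first byte...
            port.ReadTimeout = SerialPort.InfiniteTimeout;
            received.Add((byte)port.ReadByte());

            // ...then treat 50 ms of silence as the end of the transmission.
            port.ReadTimeout = interByteTimeoutMs;
            while (true)
            {
                try
                {
                    int n = port.Read(chunk, 0, chunk.Length);
                    for (int i = 0; i < n; i++) received.Add(chunk[i]);
                }
                catch (TimeoutException)
                {
                    break;   // no bytes arrived within the timeout: transmission complete
                }
            }
            return received.ToArray();
        }
    }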

How do you receive packets through TCP sockets without knowing their packet size before receiving?

I am working on a network application that can send live video feed asynchronously from one application to another, sort of like Skype. The main issue I am having is that I want to be able to send the frames but not have to know their size each time before receiving.
The way AForge.NET works when handling images is that the size of the current frame will most likely be different from the one before it. The size is not static, so I was just wondering if there was a way to achieve this. I have already tried sending the length first and then the frame, but that is not what I am looking for.
First, make sure you understand that TCP itself has no concept of a "packet" at all, at least not at the user-code level. If you conceptualize your TCP network I/O in terms of packets, you will probably get it wrong.
Now that said, you can impose a packet structure on the TCP stream of bytes. To do that where the packets are not always the same size, you can either transmit the length before the data, or delimit the data in some way, such as wrapping it in a self-describing encoding or terminating it with some marker.
Note that adding structure around the data (encoding, terminating, whatever) when you're dealing with binary data is fraught with hassles, because binary data usually is required to support any combination of bytes. This introduces a need for escaping the data or otherwise being able to flag something that would normally look like a delimiter or terminator, so that it can be treated as binary data instead of some boundary of the data.
Personally, I'd just write a length before the data. It's a simple and commonly used technique. If you still don't want to do it that way, you should be specific and explain why you don't, so that your specific scenario can be better understood.
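As a sketch of that length-prefix idea (the names and the 4-byte big-endian prefix are my assumptions, not part of the original answer):

    // Write a 4-byte length in network byte order, then the frame bytes.
    // The reader reads the prefix, then exactly that many bytes.
    using System;
    using System.IO;
    using System.Net;

    static class Framing
    {
        public static void WriteFrame(Stream stream, byte[] payload)
        {
            byte[] lengthPrefix = BitConverter.GetBytes(
                IPAddress.HostToNetworkOrder(payload.Length));
            stream.Write(lengthPrefix, 0, lengthPrefix.Length);
            stream.Write(payload, 0, payload.Length);
        }

        public static byte[] ReadFrame(Stream stream)
        {
            byte[] lengthPrefix = ReadExactly(stream, 4);
            int length = IPAddress.NetworkToHostOrder(
                BitConverter.ToInt32(lengthPrefix, 0));
            return ReadExactly(stream, length);
        }

        // TCP is a byte stream, so a single Read may return fewer bytes than asked for.
        static byte[] ReadExactly(Stream stream, int count)
        {
            var buffer = new byte[count];
            int offset = 0;
            while (offset < count)
            {
                int read = stream.Read(buffer, offset, count - offset);
                if (read == 0) throw new EndOfStreamException();
                offset += read;
            }
            return buffer;
        }
    }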

Large files over socket c Sharp

I've read around and I'm hearing mixed things about this. Do you have to split a file into chunks to send it over a stream, or does the OS do that for you?
I have a byte array of about 320,000 values which I need to get across the network. I can get the first several thousand over, but anything after that is just set to 0.
I'm using the NetworkStream class, creating a TcpListener / TcpClient, getting the stream from the listener once connected, writing the array to the stream, and then flushing - without success.
Any help would be appreciated.
Cheers,
When using TCP sockets, sending 1024 bytes may or may not be split into chunks by the OS. At our level this behavior should be considered undefined, and the receiver should be able to handle it. What most protocols do is define a message of a certain (known) size that carries information such as the file size, what range of data to read, etc. Each message the server constructs will have this header. You as the programmer can specify your chunk sizes, and each chunk must be reassembled at the receiving end.
Here's a walkthrough:
1. The server sends a command to the client with information about the file, such as total size, file name, etc.
2. The client knows how big the command is based on a pre-programmed agreement about the header size. If the command is 512 bytes, the client keeps receiving data until it has filled a 512-byte buffer. Behind the scenes, the operating system could have picked that data up in multiple chunks, but that shouldn't be a worry for you. After all, you only care about reading exactly 512 bytes.
3. The server begins sending more commands, streaming the file to the client chunk by chunk (512 bytes at a time).
4. The client receives these chunks and constructs the file over the course of the connection.
5. Since the client knows how big the file is, it stops reading on that socket once the file is complete.
6. The server terminates the connection.
That example is pretty basic, but it's good groundwork for how the communication works.
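A minimal sketch of the receiving side of that walkthrough, assuming the client has already learned the total file size from an earlier command (the names and the 512-byte chunk size are illustrative):

    // Keep reading until the known number of bytes has arrived. The OS may hand
    // the data over in arbitrarily sized pieces, so each Read result is accumulated.
    using System;
    using System.IO;
    using System.Net.Sockets;

    static class ChunkedReceive
    {
        public static byte[] ReceiveFile(NetworkStream stream, int fileSize)
        {
            var file = new byte[fileSize];
            int received = 0;

            while (received < fileSize)
            {
                // Read at most 512 bytes per iteration, never past the end of the file.
                int read = stream.Read(file, received, Math.Min(512, fileSize - received));
                if (read == 0)
                    throw new EndOfStreamException("Connection closed before the file completed.");
                received += read;
            }
            return file;   // the client stops reading once it has the whole file
        }
    }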

How should I handle incomplete packet buffers?

I am writing a client for a server that typically sends data as strings of 500 or fewer bytes. However, the data will occasionally exceed that, and a single set of data could contain 200,000 bytes for all the client knows (on initialization or significant events). I would prefer not to run each client with a 50 MB socket buffer (if that's even possible).
Each set of data is delimited by a null \0 character. What kind of structure should I look at for storing partially sent data sets?
For example, the server may send ABCDEFGHIJKLMNOPQRSTUV\0WXYZ\0123!\0. I would want to process ABCDEFGHIJKLMNOPQRSTUV, WXYZ, and 123! independently. Also, the server could send ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890LOL123HAHATHISISREALLYLONG without the terminating character. I would want that data set stored somewhere for later appending and processing.
Also, I'm using asynchronous socket methods (BeginSend, EndSend, BeginReceive, EndReceive) if that matters.
Currently I'm debating between List<Byte> and StringBuilder. Any comparison of the two for this situation would be very helpful.
Read the data from the socket into a buffer. When you get the terminating character, turn it into a message and send it on its way to the rest of your code.
Also, remember that TCP is a stream, not a packet. So you should never assume that you will get everything sent at one time in a single read.
As far as buffers go, you should probably only need one per connection at most. I'd probably start with the max size that you reasonably expect to receive, and if that fills, create a new buffer of a larger size - a typical strategy is to double the size when you run out to avoid churning through too many allocations.
If you have multiple incoming connections, you may want to do something like create a pool of buffers, and just return "big" ones to the pool when done with them.
You could just use a List<byte> as your buffer, so the .NET framework takes care of automatically expanding it as needed. When you find a null terminator you can use List.RemoveRange() to remove that message from the buffer and pass it to the next layer up.
You'd probably want to add a check and throw an exception if it exceeds a certain length, rather than just wait until the client runs out of memory.
(This is very similar to Ben S's answer, but I think a byte array is a bit more robust than a StringBuilder in the face of encoding issues. Decoding bytes to a string is best done higher up, once you have a complete message.)
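A rough sketch of that List<byte> buffering, with the dispatch step left as a placeholder (the 1 MB cap, encoding, and names are assumptions):

    // Append whatever the socket delivered, then peel off complete
    // null-terminated messages with RemoveRange.
    using System;
    using System.Collections.Generic;
    using System.Text;

    class MessageBuffer
    {
        const int MaxBufferLength = 1000000;            // guard against a runaway buffer
        readonly List<byte> buffer = new List<byte>();

        // Call this from EndReceive with the bytes that actually arrived.
        public void Append(byte[] data, int count)
        {
            for (int i = 0; i < count; i++) buffer.Add(data[i]);
            if (buffer.Count > MaxBufferLength)
                throw new InvalidOperationException("Message too long.");

            int terminator;
            while ((terminator = buffer.IndexOf((byte)0)) >= 0)
            {
                // Decode only once a complete message is available.
                string message = Encoding.ASCII.GetString(buffer.GetRange(0, terminator).ToArray());
                buffer.RemoveRange(0, terminator + 1);   // drop the message and its '\0'
                Dispatch(message);
            }
        }

        void Dispatch(string message) { Console.WriteLine(message); }
    }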
I would just use a StringBuilder and read in one character at a time, copying and emptying the builder whenever I hit a null terminator.
I wrote this answer regarding Java sockets but the concept is the same.
What's the best way to monitor a socket for new data and then process that data?

How can I set the buffer size for the underneath Socket UDP?

As we know, for UDP receive we use Socket.ReceiveFrom or UdpClient.Receive.
Socket.ReceiveFrom accepts a byte array from you to put the UDP data in.
UdpClient.Receive directly returns a byte array containing the data.
My question is: how do I set the buffer size inside the Socket? I think the OS maintains its own buffer for receiving UDP data, right? For example, if a UDP packet is sent to my machine, the OS will put it into a buffer and wait for us to call Socket.ReceiveFrom or UdpClient.Receive, right?
How can I change the size of that internal buffer?
I have tried Socket.ReceiveBufferSize; it has no effect at all for UDP, and it is clearly described as being for the TCP window. I have also done a lot of experiments which convinced me that Socket.ReceiveBufferSize is NOT for UDP.
Can anyone share some insight into the UDP internal buffer?
I have seen some posts on this, e.g.:
http://social.msdn.microsoft.com/Forums/en-US/ncl/thread/c80ad765-b10f-4bca-917e-2959c9eb102a
Dave said that Socket.ReceiveBufferSize can set the internal buffer for UDP. I disagree.
The experiment I did is like this:
27 hosts on a LAN each send a 10 KB UDP packet to me at (almost) the same time. I have a while loop to handle each packet; for each packet, I create a thread to handle it. I used UdpClient or Socket to receive the packets.
I lost about 50% of the packets. I think the sends arrive in a burst and I can't handle all of them in time.
This is why I want to increase the buffer size for UDP. Say, if I change the buffer size to 1 MB, then 27 * 10 KB = 270 KB of data can be accepted in the buffer, right?
I tried changing Socket.ReceiveBufferSize to many, many values, and it just has no effect at all.
Can anyone help?
I have had occasional dropped packets, and I had success by increasing the buffer.
I used UdpClient, and on its socket I set a larger value than the default (which was 8192 bytes). This solved the problem for me.
In my scenario I could clearly see that the smaller the buffer, the more frequent the drops, and the larger the buffer, the fewer. I finally chose a value of 2^18 for my application.
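A minimal sketch of what that looks like in code (the local port is an assumption; 1 << 18 is the 2^18 value mentioned above):

    // Enlarge the OS receive buffer on the socket underneath a UdpClient
    // before receiving, so bursts of datagrams can queue instead of being dropped.
    using System.Net;
    using System.Net.Sockets;

    class LargeBufferReceiver
    {
        static void Main()
        {
            var client = new UdpClient(11000);                 // assumed local port
            client.Client.ReceiveBufferSize = 1 << 18;         // 262144 bytes instead of the default

            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                byte[] datagram = client.Receive(ref remote);  // bursts now queue in the larger OS buffer
                // ... handle datagram ...
            }
        }
    }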
