I know how to send byte data to a server via the UDP protocol. How can I set some flags on the packet that I am sending? How can I tell if a packet that is received has some flags set? Moreover, how can I read the checksum contained in the packet? If I want to send a packet with no data and just a flag, how can I do that? I want to do this with C#. I don't want to modify the local endpoint nor the remote endpoint nor anything else about each packet.
Edit
The reason why I want to do this is that I have tried very hard to do TCP hole punching. I opened a bounty question at forward traffic from port X to computer B with c# "UDP punch hole into firewall"; anyway, I managed to do UDP hole punching. As a result I want to make UDP reliable.
In other words, I want to create an algorithm that will enable me to send UDP packets that reach their destination in the right order. If I could set and read some of the flags, that would facilitate this algorithm.
I know it will not be easy to create, but at least I want to give it a try. I have been wasting a lot of time looking for a C# library that does this...
UDP already has a checksum.
Did you mean to ask about something like RUDP? This allows you to guarantee packet delivery and ordering (both, none, or either). Lidgren implements a reliable UDP protocol.
If you want to roll your own, read up on how SEQ and ACK work in TCP, and emulate that over UDP (which is essentially what Lidgren does).
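To make the SEQ/ACK idea concrete, here is a minimal stop-and-wait sketch over UdpClient. This is not Lidgren's actual implementation: the 4-byte sequence-number header, the names, and the 500 ms timeout are all invented for this example.

using System;
using System.Net;
using System.Net.Sockets;

static class ReliableUdpSketch
{
    // Sends one payload with a 4-byte sequence number prepended, and
    // retransmits until the receiver echoes the same number back.
    // Assumes an unconnected UdpClient.
    public static void SendReliable(UdpClient client, IPEndPoint remote,
                                    uint seq, byte[] payload)
    {
        byte[] packet = new byte[4 + payload.Length];
        BitConverter.GetBytes(seq).CopyTo(packet, 0);
        payload.CopyTo(packet, 4);

        client.Client.ReceiveTimeout = 500; // ms to wait before retransmitting
        while (true)
        {
            client.Send(packet, packet.Length, remote);
            try
            {
                IPEndPoint from = null;
                byte[] ack = client.Receive(ref from);
                if (ack.Length >= 4 && BitConverter.ToUInt32(ack, 0) == seq)
                    return; // acknowledged, done
            }
            catch (SocketException)
            {
                // Timeout: fall through and retransmit.
            }
        }
    }
}

The receiver would read each datagram, echo the 4-byte sequence number back as the ACK, and use the numbers to re-order and de-duplicate what it delivers upward.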
In all honesty, I think it's going to be just as difficult to get the TCP side of things working as it is to tackle this question, and since TCP is designed to do what you're asking, I would personally take the "right tool for the right job" approach.
That said....
If you really must use UDP, then you can forget about adding anything at the protocol level; the underlying transport layer will just ignore it.
Instead, you're going to have to design your own byte-level protocol (almost going back to the days of RS-232 and serial comms...).
You're going to need your data, and it's going to have to be wrapped in some way: say, a byte that defines the start of a packet, then maybe the data, or some flags, etc...
I would look at data structures such as TLV (used by a lot of smart cards). TLV stands for Tag-Length-Value, so a record might look like:
0x10 0x04 0x01 0x02 0x03 0x04
where command code 0x10 might mean "update ticker", 0x04 means 4 bytes of data to follow... then the 4 bytes of data.
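As a sketch, building and parsing such a record in C# might look like this (the 1-byte tag and 1-byte length fields mirror the example above; the names are invented):

using System;

static class TlvSketch
{
    // Builds a TLV record: 1-byte tag, 1-byte length, then the value bytes.
    public static byte[] Build(byte tag, byte[] value)
    {
        if (value.Length > byte.MaxValue)
            throw new ArgumentException("Value too long for a 1-byte length field.");
        byte[] record = new byte[2 + value.Length];
        record[0] = tag;
        record[1] = (byte)value.Length;
        value.CopyTo(record, 2);
        return record;
    }

    // Parses one record back into its tag and value.
    public static (byte Tag, byte[] Value) Parse(byte[] record)
    {
        byte tag = record[0];
        int length = record[1];
        byte[] value = new byte[length];
        Array.Copy(record, 2, value, 0, length);
        return (tag, value);
    }
}

Build(0x10, new byte[] { 0x01, 0x02, 0x03, 0x04 }) produces exactly the byte sequence shown above.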
As for flags and checksums: well, flags are easy, you can get 8 of them in a byte, and you just flip them with bitmasks. Note that you set a flag with OR (AND only tests or clears bits):
0x00 OR 0x01 = 0x01
0x01 OR 0x80 = 0x81
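In C# that boils down to OR to set, AND to test, and AND-NOT to clear. The flag names and values here are made up for illustration:

// Example flag values, invented for illustration.
const byte FLAG_ACK = 0x01;
const byte FLAG_FIN = 0x80;

byte flags = 0x00;
flags |= FLAG_ACK;                      // set:   0x00 | 0x01 == 0x01
flags |= FLAG_FIN;                      // set:   0x01 | 0x80 == 0x81
bool hasAck = (flags & FLAG_ACK) != 0;  // test:  true
flags &= unchecked((byte)~FLAG_FIN);    // clear: back to 0x01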
Checksums are a little harder, but it really depends what level of accuracy you need. Essentially a checksum is some kind of mathematical algorithm that uses the contents of your data to create some magic number that can easily be recomputed at the receiving end.
This could be something as simple as adding all the values together, then ANDing the result with some magic number, then appending that to the end of your packet before sending the bytes.
Then do the same at the receiving end and compare the results.
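For example, a naive additive checksum; this is just the idea described above, nothing like as robust as a real CRC:

// Sums all payload bytes and masks the result to a single byte.
// The receiver recomputes this and compares it against the trailing byte.
static byte SimpleChecksum(byte[] data)
{
    int sum = 0;
    foreach (byte b in data)
        sum += b;
    return (byte)(sum & 0xFF); // the "AND by some magic number" step
}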
Of course if you really don't want to go down this route of wrapping, doing protocols and all that jazz yourself, then you could learn a lesson from the bad old days of serial comms.
There are still plenty of open-source implementations of things like XModem, ZModem, Kermit, YModem and many others floating around; since most of these can (and did) work with byte streams, it shouldn't be too difficult to make them work with a byte stream over UDP instead of a byte stream over a serial port.
Related
I am working on a network application that can send a live video feed asynchronously from one application to another, sort of like Skype. The main issue I am having is that I want to be able to send the frames without having to know their size each time before receiving.
The way AForge.NET handles images, the size of the current frame will most likely be different from the one before it. The size is not static, so I was wondering if there is a way to achieve this. And I already tried sending the length first and then the frame, but that is not what I was looking for.
First, make sure you understand that TCP itself has no concept of "packet" at all, not at the user code level. If one is conceptualizing one's TCP network I/O in terms of packets, they are probably getting it wrong.
Now that said, you can impose a packet structure on the TCP stream of bytes. To do that where the packets are not always the same size, you can either transmit the length before the data, or delimit the data in some way, such as wrapping it in a self-describing encoding, or terminating the data in some way.
Note that adding structure around the data (encoding, terminating, whatever) when you're dealing with binary data is fraught with hassles, because binary data usually is required to support any combination of bytes. This introduces a need for escaping the data or otherwise being able to flag something that would normally look like a delimiter or terminator, so that it can be treated as binary data instead of some boundary of the data.
Personally, I'd just write a length before the data. It's a simple and commonly used technique. If you still don't want to do it that way, you should be specific and explain why you don't, so that your specific scenario can be better understood.
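A sketch of that length-prefix approach over any Stream; the 4-byte big-endian header is this example's choice, not a standard:

using System;
using System.IO;
using System.Net;

static class LengthPrefix
{
    // Writes a 4-byte length (network byte order) followed by the payload.
    public static void WriteFrame(Stream stream, byte[] payload)
    {
        byte[] header = BitConverter.GetBytes(IPAddress.HostToNetworkOrder(payload.Length));
        stream.Write(header, 0, 4);
        stream.Write(payload, 0, payload.Length);
    }

    // Reads one full frame, looping because TCP may deliver partial reads.
    public static byte[] ReadFrame(Stream stream)
    {
        byte[] header = ReadExactly(stream, 4);
        int length = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(header, 0));
        return ReadExactly(stream, length);
    }

    static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }
}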
I have a requirement to create a UDP file transfer system. I know TCP is guaranteed and much more reliable, but I need to transfer huge files between locations and I think the speed advantage in this project outweighs the benefits using TCP. I’m just starting this project, but would like some guidance if anyone has done this before. I will be writing both sides (client and server) so I don’t need to worry about feature limitations in other products.
In a nutshell I need to:
Take large files and send them in chunks
Be able to throttle bandwidth from the client
Create some kind of packet numbering system for errors, retransmissions, and assembling files by chunk on the server (yes, all the stuff we get from TCP for free :-)
Configurable datagram size – I think some firewalls complain if they get too big?
Anything else I may be missing
I’m starting this journey using UdpClient and would like to write this app in C#. Any words of wisdom (other than to use TCP)?
It’s been done with huge success. We used to use RocketStream.com, but they sold their product to another company for internal use only. We typically get speeds that are 30X faster than FTP or raw TCP byte transfers.
In regard to:
"Configurable datagram size – I think some firewalls complain if they get too big?"
One datagram can be up to 65,535 bytes; considering the IP and UDP header overhead, you end up with 65,507 bytes of payload. But you have to consider how all the devices along your network path are configured. Typically most devices are set to an MTU of 1500 bytes, so that will usually be your limit "on the internet". If you set up a dedicated network between your locations, you can increase the MTU on all devices.
Further, in regard to:
"Create some kind of packet numbering system for errors, retransmissions and assembling files by chunk on server (yes, all the stuff we get from TCP for free :-)"
I think the best thing in your case would be to implement an application-level protocol, like (sketched below):
a 32-bit (4-byte) sequence number
a 32-bit (4-byte) CRC-32 checksum
any bytes left can be used for data
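A sketch of that datagram layout; the System.IO.Hashing NuGet package is assumed for the CRC-32, and the exact field sizes are this example's choice:

using System;
using System.IO.Hashing; // NuGet package: System.IO.Hashing

static class FileTransferPacket
{
    // Datagram layout for this sketch: 4-byte sequence number,
    // 4-byte CRC-32 of the payload, then the payload itself.
    public static byte[] Build(uint sequence, byte[] payload)
    {
        byte[] packet = new byte[8 + payload.Length];
        BitConverter.GetBytes(sequence).CopyTo(packet, 0);
        Crc32.Hash(payload).CopyTo(packet, 4); // CRC-32 is 4 bytes
        payload.CopyTo(packet, 8);
        return packet;
    }
}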
Hope this gives you a bit of direction.
Edit:
From experience I can tell you that UDP is about 10-15% faster than TCP on dedicated, UDP-tuned networks.
I'm not convinced the speed gain will be tremendous, but it's an interesting experiment. Such a protocol will look and behave more like one of the traditional modem-based protocols, and ZModem is probably one of the better examples to draw inspiration from (it implements an ACK window, adaptive block size, etc.).
There are already some people who tried this, check out this site.
That would be cool if you succeed.
Don't go into it without Wireshark. You'll need it.
For the algorithm, I guess that you have pretty much the idea of how to start. Maybe some pointers:
start with an MTU that is common to both endpoints, and use packets of only that size, so you'll have control over packet fragmentation (if you're coming down from TCP, I assume it's for more control over the low-level stuff).
you'll probably want to look into STUN or TURN for punching holes into NATs.
look into ZModem - that also has nostalgic value :)
since you want to squeeze the maximum from your link, try to put as much as you can into the 'control packets' so you don't waste a single byte.
I wouldn't use any CRC at the packet level, because I guess the networks underneath handle that.
I just had an idea...
break the file up into 16k chunks (the length is arbitrary)
create a hash of each chunk (see the sketch after this list)
transmit all the chunk hashes, using any protocol
at the receiving end, prepare by hashing everything you have on your hard drive, network, I mean everything, in 16k chunks
compare the received hashes to your local hashes and reconstruct the data you already have
download the rest using any protocol
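A minimal sketch of the chunk-hashing step; SHA-256 and the helper names are arbitrary choices for this example:

using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

static class ChunkHasher
{
    // Hashes a file in 16k chunks; the last chunk may be shorter.
    public static List<byte[]> HashChunks(string path, int chunkSize = 16 * 1024)
    {
        var hashes = new List<byte[]>();
        using (var sha = SHA256.Create())
        using (var file = File.OpenRead(path))
        {
            byte[] buffer = new byte[chunkSize];
            int read;
            while ((read = file.Read(buffer, 0, chunkSize)) > 0)
                hashes.Add(sha.ComputeHash(buffer, 0, read));
        }
        return hashes;
    }
}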
I know that I'm 6 months behind schedule, but I just couldn't resist.
Others have said more interesting things, but I would like to point out that you need to make sure you use a good compression algorithm. That will make a world of difference.
Also, I would recommend validating your assumptions about the possible speed improvement: make a trivial system that just sends data (without worrying about loss, corruption, or other problems) and see what bandwidth you get. This will at least give you a realistic upper bound for what can be done.
Finally, consider why you are taking on this task. Will the speed gains be worth the time spent developing it?
Please tell me how to read data from a COM port in C# when the data arrives in bytes but of variable length; the response may be a byte array of 20 bytes or of 50. The main question is: how do you know that the device has stopped responding?
The most important part is defining the protocol bytes used. You should have both start and stop markers that tell your SerialPort code when to stop reading. Usually you don't have to care beyond this, since your callback function will receive the data in an array.
http://msmvps.com/blogs/coad/archive/2005/03/23/SerialPort-_2800_RS_2D00_232-Serial-COM-Port_2900_-in-C_2300_-.NET.aspx
You don't. COM ports are a bit like TCP - they're a streaming service - they only transfer 7 or 8 bits at a time (depending on how you set the port up; normally 8 bits).
If you want to send anything more complex than a byte, you need to build a protocol on top. If your data is text, a CR or null at the end will often do. If it's values in the whole byte range 0-255, then you need a more complex protocol to ensure that the framing of the data units is received correctly. Maybe your requirements can be met by a simple timeout, e.g. 'if no chars are received for 500 ms, that's the end of the data unit', but such timeout protocols are obviously low performance and subject to failures :(
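As a sketch of that timeout approach; the 500 ms figure is arbitrary, and as noted it trades performance for simplicity:

using System;
using System.Collections.Generic;
using System.IO.Ports;

static class SerialReadSketch
{
    // Reads bytes until the port stays silent for ReadTimeout milliseconds,
    // then treats the accumulated bytes as one complete response.
    public static byte[] ReadUntilSilence(SerialPort port)
    {
        port.ReadTimeout = 500; // ms of silence that ends the data unit
        var received = new List<byte>();
        try
        {
            while (true)
                received.Add((byte)port.ReadByte());
        }
        catch (TimeoutException)
        {
            // No byte arrived within the timeout: assume the device is done.
        }
        return received.ToArray();
    }
}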
This is kind of a branch off of my other question. Read it if you like, but it's not necessary.
Basically, I realized that in order to use C#'s BeginReceive() effectively on large messages, I need to either (a) read the packet length first, then read exactly that many bytes, or (b) use an end-of-packet delimiter. My question is: is either of these present in protocol buffers? I haven't used them yet, but going over the documentation it doesn't seem like there is a length header or a delimiter.
If not, what should I do? Should I just build the message then prefix/suffix it with the length header/EOP delimiter?
You need to include the size or an end marker in your protocol. Nothing is built into stream-based sockets (TCP/IP) other than support for an indefinite stream of octets arbitrarily broken up into separate packets (and packets can be split in transit as well).
A simple approach would be for each "message" to have a fixed-size header, including both a protocol version and a payload size and any other fixed data, followed by the message content (payload).
Optionally a message footer (fixed size) could be added with a checksum or even a cryptographic signature (depending on your reliability/security requirements).
Knowing the payload size allows you to keep reading a number of bytes that will be enough for the rest of the message (and if a read completes with less, doing another read for the remaining bytes until the whole message has been received).
Having an end-of-message indicator also works, but you need to define how to handle a message whose content contains that same octet sequence...
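A sketch of such a fixed header; the field sizes and layout are invented for this example, and the payload would then be read with a read-exactly loop like the one shown earlier on this page:

using System;

// Fixed 8-byte header: 2-byte protocol version, 2-byte message type,
// 4-byte payload length. This layout is invented for the sketch.
struct MessageHeader
{
    public ushort Version;
    public ushort MessageType;
    public int PayloadLength;

    public static MessageHeader Parse(byte[] raw)
    {
        return new MessageHeader
        {
            Version = BitConverter.ToUInt16(raw, 0),
            MessageType = BitConverter.ToUInt16(raw, 2),
            PayloadLength = BitConverter.ToInt32(raw, 4)
        };
    }
}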
Apologies for arriving late at the party. I am the author of protobuf-net, one of the C# implementations. For network usage, you should consider the "[De]SerializeWithLengthPrefix" methods - that way, it will automatically handle the lengths for you. There are examples in the source.
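For reference, usage looks roughly like this; the message type and its fields are invented for the example, so check the library's own samples for the exact overloads:

using System.IO;
using ProtoBuf; // NuGet package: protobuf-net

[ProtoContract]
class ImageMessage
{
    [ProtoMember(1)] public byte[] Payload { get; set; }
}

static class ProtoFraming
{
    // protobuf-net writes and reads the length prefix for you.
    public static void Send(Stream stream, ImageMessage msg) =>
        Serializer.SerializeWithLengthPrefix(stream, msg, PrefixStyle.Base128);

    public static ImageMessage Receive(Stream stream) =>
        Serializer.DeserializeWithLengthPrefix<ImageMessage>(stream, PrefixStyle.Base128);
}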
I won't go into huge detail on an old post, but if you want to know more, add a comment and I'll get back to you.
I agree with Matt that a header is better than a footer for Protocol Buffers, for the primary reason that as PB is a binary protocol it's problematic to come up with a footer that would not also be a valid message sequence. A lot of footer-based protocols (typically EOL ones) work because the message content is in a defined range (typically 0x20 - 0x7F ASCII).
A useful approach is to have your lowest level code just read buffers off of the socket and present them up to a framing layer that assembles complete messages and remembers partial ones (I present an async approach to this (using the CCR) here, albeit for a line protocol).
For consistency, you could always define your message as a PB message with three fields: a fixed-int as the length, an enum as the type, and a byte sequence that contains the actual data. This keeps your entire network protocol transparent.
TCP/IP, as well as UDP, packets include some reference to their size. The IP header contains a 16-bit field that specifies the length of the IP header and data in bytes. The TCP header contains a 4-bit field that specifies the size of the TCP header in 32-bit words. The UDP header contains a 16-bit field that specifies the length of the UDP header and data in bytes.
Here's the thing.
Using the standard run-of-the-mill sockets in Windows, whether you're using the System.Net.Sockets namespace in C# or the native Winsock stuff in Win32, you never see the IP/TCP/UDP headers. These headers are stripped off so that what you get when you read the socket is the actual payload, i.e., the data that was sent.
The typical pattern from everything I've ever seen and done using sockets is that you define an application-level header that precedes the data you want to send. At a minimum, this header should include the size of the data to follow. This will allow you to read each "message" in its entirety without having to guess as to its size. You can get as fancy as you want with it, e.g., sync patterns, CRCs, version, type of message, etc., but the size of the "message" is all you really need.
And for what it's worth, I would suggest using a header instead of an end-of-packet delimiter. I'm not sure if there is a significant disadvantage to the EOP delimiter, but the header is the approach used by most IP protocols I've seen. In addition, it just seems more intuitive to me to process a message from the beginning rather than wait for some pattern to appear in my stream to indicate that my message is complete.
EDIT: I have only just become aware of the Google Protocol Buffers project. From what I can tell, it is a binary serialization/de-serialization scheme for WCF (I'm sure that's a gross oversimplification). If you are using WCF, you don't have to worry about the size of the messages being sent because the WCF plumbing takes care of this behind the scenes, which is probably why you haven't found anything related to message length in the Protocol Buffers documentation. However, in the case of sockets, knowing the size will help out tremendously as discussed above. My guess is that you will serialize your data using the Protocol Buffers and then tack on whatever application header you come up with before sending it. On the receive side, you'll pull off the header and then de-serialize the remainder of the message.
I am building a C# application, using the server-client model, where the server is sending an image (100kb) to the client through a socket every 50ms...
I was using TCP, but besides the overhead of this protocol, sometimes the client ended up with more than one image on the socket, and I still haven't thought of a clever mechanism to split the bytes of each image (actually, I just need the most recent one).
I tried using UDP, but came to the conclusion that I can't send 100 kB datagrams, only 64 kB ones. And even so, I shouldn't use more than 1500 bytes; otherwise the packet would be fragmented along the network and the chances of losing parts of it would be greater.
So now I'm a bit confused. Should I continue using TCP and put some escape bytes at the end of each image so the client can separate them? Or should I use UDP, send datagrams of 1500 bytes, and come up with a mechanism for ordering and recovery?
The key goal here is transmitting the images very fast. I don't mind losing some on the way as long as the client keeps receiving newer ones.
Or should I use another protocol? Thanks in advance!
You should consider using Real-time Transport Protocol (aka RTP).
The underlying transport used by RTP is UDP, but it adds an additional layer to carry timestamps, sequence numbers, etc.
RTP is the main media transfer protocol used by VoIP and video-over-IP systems. I'd be quite surprised if you can't find existing C# implementations of the protocol.
Also, if your image files are in JPEG format you should be able to produce an RTP/MJPEG stream. There are quite a few video viewers that already have native support for receiving and displaying such a stream, since some IP webcams output in that format.
First of all, your network might not be able to handle this no matter what you do, but I would go with UDP. You could try splitting up the images into smaller bits, and only display each image if you get all the parts before the next image has arrived.
Also, you could use RTP as others have mentioned, or try UDT. It's a fairly lightweight reliable layer on top of UDP. It should be faster than TCP.
I'd recommend using UDP if:
Your application can cope with an image or small burst of images not getting through,
You can squeeze your images into 65535 bytes.
If you're implementing a video conferencing application then it's worth noting that the majority use UDP.
Otherwise you should use TCP and implement an approach to delimit the images. One suggestion in that regard is to take a look at the RTP protocol. It's specifically designed for carrying real-time data such as VoIP and video.
Edit: I've looked around quite a few times in the past for a .NET RTP library, and apart from wrappers for non-.NET libraries or half-completed ones I did not have much success. I just had another quick look, and ConferenceXP looks a bit more promising.
The other answers cover good options re: UDP or a 'real' protocol like RTP.
However, if you want to stick with TCP, just build yourself a simple 'message' structure to cover your needs. The simplest? length-prefixed. First, send the length of the image as 4 bytes, then send the image itself. Easy enough to write the client and server for.
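A sketch of the receive side that also skips stale frames when several are queued, reusing the ReadFrame helper sketched earlier on this page. DataAvailable is a simplification here: a partially arrived frame will still block until the rest of it comes in.

using System.Net.Sockets;

static class ImageReceiver
{
    // Reads one length-prefixed image; if more frames are already queued
    // on the socket, keeps reading and returns only the newest one.
    public static byte[] ReadLatestImage(NetworkStream stream)
    {
        byte[] image = LengthPrefix.ReadFrame(stream);
        while (stream.DataAvailable)   // older frames are stale: skip ahead
            image = LengthPrefix.ReadFrame(stream);
        return image;
    }
}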
If the latest image matters more than receiving every image, UDP should be your first choice.
But if you're dealing with frames larger than 64K, you'll have to do some form of re-framing yourself. Don't be too concerned with fragmented frames: either you deal with it or the lower layer will, and you only want complete pictures anyway.
What you will want is some form of encapsulation with timestamps/sequence numbers.