Need for a checksum in serial communication - C#

I am currently working on a project involving serial communication from a PC (a USB-to-serial application coded in C#) to an embedded platform (STM32F4).
I saw that in some cases it's mandatory to have a checksum in a communication frame.
The Communication configuration:
Baud-rate = 115200,
No Parity bit,
One StopBit,
No Handshake,
Frame length: 16 bytes
Is it worth adding a checksum to my application? What are the reasons why I should (or should not) have this checksum?
Thank you for your answer.

Yes, you must have a checksum. The only acceptable non-hobbyist solution is a proper checksum based on a CRC. The most common industry standard is CRC-16-CCITT (polynomial 0x1021). This will catch any single-bit error, most double-bit errors and some burst errors.
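For illustration, a bit-by-bit CRC-16-CCITT can be done in a few lines of C#. This is a minimal sketch using the common "CCITT-FALSE" convention (initial value 0xFFFF, MSB-first, no final XOR); whichever variant you pick, it must match what the STM32 side computes:

// Minimal CRC-16-CCITT sketch (polynomial 0x1021).
// Initial value 0xFFFF and MSB-first bit order are assumptions
// (the "CCITT-FALSE" variant); match them to the embedded side.
static ushort Crc16Ccitt(byte[] data)
{
    ushort crc = 0xFFFF;
    foreach (byte b in data)
    {
        crc ^= (ushort)(b << 8);
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) != 0
                ? (ushort)((crc << 1) ^ 0x1021)
                : (ushort)(crc << 1);
    }
    return crc;
}

The two CRC bytes are appended to the frame; with the 16-byte frames from the question that could mean, for example, 14 payload bytes plus a 2-byte CRC, with the receiver recomputing the CRC over the payload and comparing.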
Even if you only use RS-232 in an office environment, any EMI caused by crappy consumer electronics could cause glitches and incorrect data. There is a lot of such crappy electronics around: for example, it is not at all uncommon for the electronics in your PC to have poor EMC performance. In particular, there are countless USB-to-serial adapters of downright awful quality.
The UART hardware in itself has no error detection worth mentioning: it is ancient 1960s technology. On the hardware level, it only checks data integrity based on start and stop bits, and will miss any errors in between. (Parity checking is equally poor.)
Alternatively, you could perhaps get a USB-to-RS-485 adapter instead and use RS-485, which is far more rugged since it uses differential signalling. But that requires RS-485 transceivers on the target side too.

It is customary to have a checksum in order to verify the correctness of the data, even though serial communication is relatively reliable. You will also definitely need a sync marker made up of at least two bytes that are always assigned a specific value which you do not expect to appear in your data. The sync marker is used on the receiving side to find the start of each message, because this is a stream-oriented link, not a packet-based one.
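A receiving-side sketch of that idea (the sync values 0xAA 0x55 and the 16-byte frame length are placeholder assumptions; pick sync values that are unlikely to appear in your payload):

// Sketch: hunt for a two-byte sync marker, then read the rest of the frame.
const byte Sync1 = 0xAA, Sync2 = 0x55;   // placeholder sync values
const int FrameLength = 16;              // per the question's frame size

static byte[] ReadFrame(System.IO.Ports.SerialPort port)
{
    int state = 0;
    while (true)                         // scan byte by byte until Sync1, Sync2 seen
    {
        int b = port.ReadByte();
        if (state == 1 && b == Sync2) break;
        state = (b == Sync1) ? 1 : 0;    // also re-arms on AA AA 55 sequences
    }
    var frame = new byte[FrameLength - 2];
    int read = 0;
    while (read < frame.Length)          // frame body after the sync bytes
        read += port.Read(frame, read, frame.Length - read);
    return frame;
}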

Related

SharpSNMP max-repetitions increase causes buffer size exception through GPRS

I am trying to send SNMP requests to a remote location.
I am using the SharpSNMP 8.5.0 library and the Snmp.BulkWalk example from a Code Project post ( here ).
In the example they use 10 as max-repetitions, and using sniffing software I noticed that this creates multiple datagrams to walk the subtree; I get about 120 result packets back every time. So I tried a higher max-repetitions number and the packet count went down; in fact I can get all the data in one packet. Now I have another problem: the remote device is on GPRS, and when I snmpwalk it from the server over GPRS, I get a timeout or a "buffer out of size" error. When I run the same solution on my local PC and access the remote device through my router (no GPRS involved), I get no errors and all the data!
Can someone explain this behavior? Does it have to do with a GPRS limitation? Is GPRS unreliable? Or is it a network limitation on the server?
(The MTU on the server is 1500.) Does anyone have experience with best practices and the optimal packet size for SNMP UDP datagrams?
Though I am the author of that library, I could not answer the GPRS part, as I am not a mobile network expert.
What I can answer is the packet count part, which is relatively simple if you check the definition of "max-repetitions":
https://www.webnms.com/snmp/help/snmpapi/snmpv3/v2c/maxrepetition.html
By setting a larger value for this parameter, a single packet can contain more results, so obviously fewer packets are needed.
I used 10 in that Code Project article, because it was just an example. You might see from the link above that other libraries might use 50 as the default.
Regarding best practices for SNMP packet size, I've always been told that you should avoid exceeding the network MTU. In other words, set max-repetitions so that the Ethernet frames don't regularly exceed 1500 bytes. (Of course, this assumes that the size of your table cells is predictable.)
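As a rough sanity check, you can estimate a max-repetitions value from the MTU; every figure below is an assumption for illustration, so measure your real PDUs with a sniffer:

// Rough sizing sketch: how many repetitions fit in one unfragmented reply.
const int Mtu = 1500;                      // typical Ethernet MTU
const int HeaderOverhead = 20 + 8 + 60;    // IP + UDP + assumed SNMP overhead
const int BytesPerRow = 40;                // assumed encoded size of one table row
int maxRepetitions = (Mtu - HeaderOverhead) / BytesPerRow;  // about 35 here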
While larger packets should work on most well-configured networks, it's advisable to avoid fragmented packets on the network: packet reassembly can create significant overhead in the networking equipment. And if you're going to spread the PDUs over several packets anyway, the drawback of a few more back-and-forth requests is not that bad.
For example, Cisco equipment seems to follow this best practice, and it's recommended in a Microsoft article.
(BTW, next time you have two separate questions, consider posting them as two questions!)

Reliability over a serial connection

I have two computers communicating over a serial modem.
I would like to have a reliability protocol on that line.
I have been looking into PPP, SLIP and RATP. Not all of them are the best fit, and I do not want to write all that code, especially if there is a good code base for that online.
Is there a library or code project in C# that can be used for that purpose?
If not, what protocol would you recommend implementing?
The connection speed is 9600 baud, but the amount of data sent is not very big, and speed is not a big issue. Simplicity and ease of development are!
I always just add a CRC to each message, but my higher-level protocols are self-synchronizing and loss-tolerant by virtue of unsolicited state reports: if a command is lost and the state doesn't change, that becomes apparent on the next state report. Depending on whether your requirement is to detect or correct errors, and whether you can tolerate extra delays for retransmissions, you might need to look into forward error correcting codes.
Concepts of interest include message sequence numbers, acknowledgement, go-back-N vs selective retransmit, and minimum distance between codes (Hamming distance).
I definitely suggest you look at the design of TCP. The basics are really pretty minimal for guaranteed in-order delivery.
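To make that concrete, a minimal stop-and-wait scheme (one outstanding frame, a single alternating sequence bit, retransmit on timeout) already covers the basics. Everything below, from the frame layout to the ACK encoding and the 1-second timeout, is an illustrative assumption rather than an established protocol:

using System;
using System.IO.Ports;

static class StopAndWait
{
    static int _seq;                          // alternates 0/1

    public static void Send(SerialPort port, byte[] payload)
    {
        var frame = new byte[payload.Length + 1];
        frame[0] = (byte)_seq;                // sequence bit in the header
        Array.Copy(payload, 0, frame, 1, payload.Length);
        // a per-frame CRC, as recommended above, is omitted for brevity

        port.ReadTimeout = 1000;              // generous timeout, in milliseconds
        while (true)
        {
            port.Write(frame, 0, frame.Length);
            try
            {
                int ack = port.ReadByte();    // receiver echoes the sequence bit
                if (ack == _seq) break;       // acknowledged, done
            }
            catch (TimeoutException)
            {
                // lost frame or lost ACK: retransmit
            }
        }
        _seq ^= 1;                            // next message uses the other bit
    }
}

At 9600 baud with small messages the bandwidth-delay product is tiny, so stop-and-wait usually costs little; go-back-N or selective retransmit only start to pay off on faster or higher-latency links.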

Serial Message Separation

I am trying to implement serial communications between a micro-controller and a C# Windows application.
I have everything working correctly in the computer-to-micro-controller direction, but I am having trouble working out how exactly to implement communications in the other direction.
My messages are composed of 4 bytes
B0 – Address/Name of the value being sent
B1 – High Byte
B2 – Low Byte
B3 – Checksum = Addition of bytes 0-2
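For illustration, verifying a received message against that checksum takes only a couple of lines of C# (a sketch of the format above, with the sum truncated to one byte):

// Sketch: B3 must equal the low 8 bits of B0 + B1 + B2.
static bool ChecksumOk(byte[] msg) =>
    msg.Length == 4 && (byte)(msg[0] + msg[1] + msg[2]) == msg[3];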
To make sure complete messages are received, I have the micro-controller give up on the current message if there is a gap longer than 20 ms between bytes. This appears to work well and can tolerate communication faults that would otherwise cause a loss of synchronisation.
I am not sure how I could implement this delay from within a C# application, if at all, as I know you have much less fine-grained control over timing.
I have seen other ASCII protocols that send a start and a stop character, but I am not sure how to implement this when sending binary data, where my values can take any possible byte value and might happen to equal the start or stop character.
I need to keep the micro-controller side basic, as I have limited resources, and the controller's primary task requires very precise timing (sub-µs range) that the overhead of converting ASCII to decimal could disturb.
Does anybody have recommendations on how I should implement this, on either the micro-controller or the computer side?
EDIT
I have looked at some of the other questions on here, but they all seem to refer to much larger, ASCII-based messages.
This is a pretty bad idea. It assumes that the machine on the other end of the wire is capable of providing the same guarantee. 20 ms is no issue at all on a micro-controller; for a machine that boots Linux or Windows, no way. A thread that's busy writing to the serial port can easily lose the processor for hundreds of milliseconds: a garbage collection, in the case of C#.
Just don't optimize for the exceptional case; there's no point. Timeouts should be in the second range, ten times larger than the worst case you expect. Make the protocol more reliable by framing it: always use a start byte that gives the receiver a chance to re-synchronize, and you might as well include a length byte, since you are bound to need it sooner or later. Favor a CRC over a checksum. Check out RFC 916 for a recoverable protocol suggestion, albeit one that is widely ignored. It worked well when I used it, although it needed extra work to make connection attempts reliable: you have to flush the receive buffer.
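To sketch what that framing might look like on the sending side (the start value 0x7E, the single length byte, and the CRC-16 variant are illustrative choices, not part of RFC 916):

// Sketch: frame = start byte, length byte, payload, CRC-16 (big-endian).
const byte Start = 0x7E;                      // arbitrary placeholder start byte

static byte[] BuildFrame(byte[] payload)
{
    var frame = new byte[payload.Length + 4];
    frame[0] = Start;
    frame[1] = (byte)payload.Length;          // the length byte suggested above
    System.Array.Copy(payload, 0, frame, 2, payload.Length);
    ushort crc = Crc16(frame, 1, payload.Length + 1);  // CRC over length + payload
    frame[frame.Length - 2] = (byte)(crc >> 8);
    frame[frame.Length - 1] = (byte)crc;
    return frame;
}

static ushort Crc16(byte[] data, int offset, int count) // CRC-16-CCITT, as earlier
{
    ushort crc = 0xFFFF;
    for (int i = offset; i < offset + count; i++)
    {
        crc ^= (ushort)(data[i] << 8);
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) != 0 ? (ushort)((crc << 1) ^ 0x1021) : (ushort)(crc << 1);
    }
    return crc;
}

The receiver scans for the start byte, reads the length, and drops the frame (and rescans) whenever the CRC does not match.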
You can set the read timeout to 20 ms with:
serialPort.ReadTimeout = 20;
This will make the read operation time out after 20ms, in which case you can do what you want.
Don't use ReadExisting with this timeout, as it does not honor the read timeout;
instead use Read() or ReadByte() and check for a TimeoutException.
Incidentally, the same can be done with WriteTimeout, even on successful writes, so take care with that.
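For example, collecting a fixed 4-byte message while treating any inter-byte gap as a reason to resynchronize might look like this (port name and baud rate are placeholders):

// Sketch: rebuild the 4-byte message, restarting on a 20 ms inter-byte gap.
var port = new System.IO.Ports.SerialPort("COM1", 115200);
port.ReadTimeout = 20;                  // milliseconds
port.Open();

var msg = new byte[4];
int got = 0;
while (got < msg.Length)
{
    try
    {
        int b = port.ReadByte();        // throws TimeoutException after 20 ms
        msg[got++] = (byte)b;
    }
    catch (System.TimeoutException)
    {
        got = 0;                        // gap detected: discard partial message
    }
}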

CRC checking done automatically on TCP/IP?

I have a question to ask.
I wonder: while sending data with TcpClient, is there any sort of CRC checking algorithm that works automatically? Or do I have to implement my own algorithm and resend the data if it doesn't arrive at the remote host correctly?
Any ideas?
My best regards...
The TCP checksum is fairly weak, but it is considered good enough for a reliable stream protocol. Often there are more robust checksums being performed at the data link layer; Ethernet, for instance, uses CRC-32.
If you really want to roll your own, here is a detailed guide to the theory and implementation.
There is a checksum in TCP. But, as the same article says,
The TCP checksum is a weak check by modern standards. Data Link Layers with high bit error rates may require additional link error correction/detection capabilities. The weak checksum is partially compensated for by the common use of a CRC or better integrity check at layer 2, below both TCP and IP, such as is used in PPP or the Ethernet frame.
So if you have concerns about that, maybe it won't hurt to add another layer of checking.
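For instance, assuming the System.IO.Hashing NuGet package (any CRC or hash routine would do), stamping each application-level message could look like this:

// Sketch: append an application-level CRC-32 to each message before sending
// it over the TCP stream; the receiver recomputes and compares.
using System.IO.Hashing;   // NuGet package, an assumption for this sketch

static byte[] WithCrc(byte[] message)
{
    byte[] crc = Crc32.Hash(message);         // 4-byte CRC-32 of the payload
    var framed = new byte[message.Length + 4];
    message.CopyTo(framed, 0);
    crc.CopyTo(framed, message.Length);
    return framed;
}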
TCP contains a checksum and the TCP/IP stack will detect broken packets. So you don't need to implement any error detection algorithms on your own unless you want to.
The whole point of using TCP is that the error checking is built into the protocol, so you don't have to worry about this kind of thing.

UDP File Transfer - Yes, UDP

I have a requirement to create a UDP file transfer system. I know TCP is guaranteed and much more reliable, but I need to transfer huge files between locations and I think the speed advantage in this project outweighs the benefits of using TCP. I'm just starting this project, but would like some guidance if anyone has done this before. I will be writing both sides (client and server) so I don't need to worry about feature limitations in other products.
In a nutshell I need to:
Take large files and send them in chunks
Be able to throttle bandwidth from the client
Create some kind of packet numbering system for errors, retransmissions, and assembling files by chunk on the server (yes, all the stuff we get from TCP for free :-)
Configurable datagram size – I think some firewalls complain if they get too big?
Anything else I may be missing
I’m starting this journey using UdpClient and would like to write this app in C#. Any words of wisdom (other than to use TCP)?
It’s been done with huge success. We used to use RocketStream.com, but they sold their product to another company for internal use only. We typically get speeds that are 30X faster than FTP or raw TCP byte transfers.
In regard to:
Configurable datagram size – I think some firewalls complain if they get too big?
One datagram can be up to 65,535 bytes in total; after the 20-byte IP header and the 8-byte UDP header, you end up with 65,507 bytes for payload. But you have to consider how all the devices along your network path are configured. Most devices are set to an MTU of 1500 bytes, so "on the internet" your practical limit is typically 1472 bytes of UDP payload per unfragmented packet (1500 minus the 28 header bytes). If you set up a dedicated network between your locations, you can increase the MTU on all devices.
Further, in regard to:
Create some kind of packet numbering system for errors, retransmissions and assembling files by chunk on server (yes, all the stuff we get from TCP for free :-)
I think the best thing in your case would be to implement an application-level protocol, e.g.:
a 32-bit (4-byte) sequence number
a 4-byte CRC-32 checksum
any bytes left can be used for data
Hope this gives you a bit of direction.
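Packed into bytes, such a header might look like this (a sketch; the CRC-32 assumes the System.IO.Hashing NuGet package, and BitConverter uses the sender's native byte order, which both ends must agree on):

using System;
using System.IO.Hashing;   // NuGet package, an assumption for this sketch

// Sketch: one datagram = 4-byte sequence number + 4-byte CRC-32 + chunk data.
static byte[] BuildDatagram(uint sequence, byte[] chunk)
{
    var datagram = new byte[8 + chunk.Length];
    BitConverter.GetBytes(sequence).CopyTo(datagram, 0);  // sequence number
    Crc32.Hash(chunk).CopyTo(datagram, 4);                // CRC-32 of the data
    chunk.CopyTo(datagram, 8);
    return datagram;
}

With a 1500-byte MTU that leaves about 1464 bytes of chunk data per unfragmented datagram (1472 bytes of UDP payload minus the 8-byte header).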
Edit:
From experience I can tell you that UDP is about 10-15% faster than TCP on dedicated, UDP-tuned networks.
I'm not convinced the speed gain will be tremendous, but it is an interesting experiment. Such a protocol will look and behave more like one of the traditional modem-based protocols, and ZModem is probably one of the better examples to get some inspiration from (it implements an ack window, adaptive block size, etc.).
There are already some people who tried this, check out this site.
That would be cool if you succeed.
Don't go into it without Wireshark. You'll need it.
For the algorithm, I guess you pretty much have an idea of how to start. Maybe some pointers:
Start with an MTU that is common to both endpoints, and use packets of only that size, so you keep control over packet fragmentation (when you come down from TCP, I assume it is exactly for more control over the low-level stuff).
You'll probably want to look into STUN or TURN for punching holes through NATs.
Look into ZModem; that also has nostalgic value. :)
Since you want to squeeze the maximum from your link, try to pack as much as you can into the 'control packets' so you don't waste a single byte.
I wouldn't use any CRC at the packet level, because I guess the networks underneath are handling that stuff.
I just had an idea...
Break up the file into 16 KB chunks (the length is arbitrary).
Create a hash of each chunk.
Transmit all the hashes of the chunks, using any protocol.
At the receiving end, prepare by hashing everything you have on your hard drive, network, I mean everything, in 16 KB chunks.
Compare the received hashes to your local hashes and reconstruct the data you already have.
Download the rest using any protocol.
I know that I'm 6 months behind the schedule, but I just couldn't resist.
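The chunk-hashing part of that idea is straightforward; a sketch using SHA-256 and the 16 KB chunk size from the list above:

using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

// Sketch: hash a file in 16 KB chunks, one hash per chunk.
static List<byte[]> HashChunks(string path)
{
    const int ChunkSize = 16 * 1024;
    var hashes = new List<byte[]>();
    using var sha = SHA256.Create();
    using var file = File.OpenRead(path);
    var buffer = new byte[ChunkSize];
    int read;
    while ((read = file.Read(buffer, 0, ChunkSize)) > 0)
        hashes.Add(sha.ComputeHash(buffer, 0, read));
    return hashes;
}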
Others have said more interesting things, but I would like to point out that you need to make sure you use a good compression algorithm. That will make a world of difference.
Also, I would recommend validating your assumption that a speed improvement is possible: build a trivial system that just sends data (without worrying about loss, corruption, or other problems) and see what bandwidth you get. This will at least give you a realistic upper bound for what can be done.
Finally, consider why you are taking on this task. Will the speed gains be worth the amount of time spent developing it?
