Timeouts in C# serial port

I am using the C# SerialPort library to communicate between a sensor and a PC.
I am frequently getting timeouts from the SerialPort.Read() method even though there is data available. I used serial sniffers to confirm that the whole packet is arriving at the port, but somehow .NET does not pick all of it up and times out. I am reading bytes, and the byte count I receive is 2112 less than the serial port buffer size. I have tried multiple things and am now considering writing the serial code in native C/C++ and calling it from C#. Can someone share more thoughts, or experience calling native C/C++ code from C#?

running at baud rates 460800 to 921600
Those are pretty strange baud rates, and rather high. Clearly you are not using the DataReceived event, so it becomes critical how often you call the Read() method. Spend any time doing something else, including Windows deciding that something more important needs to run and context-switching away from your thread, and the receive buffer will quickly overflow. Not implementing the SerialPort.ErrorReceived event is a standard mistake, so you just don't see those overflows; all you see is missing data.
Writing this code in C++ is very unlikely to bring relief. There is only one API for serial ports; SerialPort is just a thin wrapper and uses the same Win32 functions your C++ code would.
So take all of the following steps (a sketch of the event-driven approach follows the list):
Implement the ErrorReceived event so you know that overflows occur
Favor using the DataReceived event so you don't depend on calling Read() frequently enough
Set the ReadBufferSize to a nice big number so the driver can take up the slack
Set the Handshake property so the driver can tell the device to stop sending when the buffer is full
Check if you can implement a protocol so the device doesn't just fire-hose the machine
Lower the baud rate if it still isn't reliable enough.
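As an illustration, here is a minimal sketch of the event-driven approach described above. The port name, baud rate, and buffer size are assumptions for illustration; substitute your device's settings.

using System;
using System.IO.Ports;

class SensorReader
{
    static void Main()
    {
        // COM3 and 460800 baud are placeholders; use your device's settings.
        var port = new SerialPort("COM3", 460800, Parity.None, 8, StopBits.One);

        port.ReadBufferSize = 1 << 20;              // a nice big buffer so the driver can take up the slack
        port.Handshake = Handshake.RequestToSend;   // let the driver throttle the device

        port.DataReceived += (s, e) =>
        {
            var p = (SerialPort)s;
            var buffer = new byte[p.BytesToRead];
            int read = p.Read(buffer, 0, buffer.Length);
            // hand buffer[0..read] off to your parser here; keep this handler cheap
        };

        port.ErrorReceived += (s, e) =>
            Console.WriteLine("Serial error: " + e.EventType); // SerialError.Overrun means data was lost

        port.Open();
        Console.ReadLine(); // keep the process alive while events arrive
    }
}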

Related

SharpSNMP max-repetitions increase causes buffer size exception through GPRS

I am trying to send SNMP requests to a remote location.
I am using the SharpSNMP 8.5.0 library and the Snmp.BulkWalk example from a Code Project post (here).
In the example they use 10 as max-repetitions, and with sniffing software I noticed that this produces multiple datagram packets to walk the subtree; I get about 120 packets of results back every time. So I tried a higher max-repetitions value and noticed that the packet count goes down; in fact, I can get all the data in one packet. Now I have another problem: the remote device is connected over GPRS, and when I snmpwalk the device from the server (over GPRS) I get a timeout or a buffer-out-of-size error. When I run the same solution on my local PC and access the remote device through my router (no GPRS involved), I get no errors and all the data!
Can someone explain this behavior? Does it have to do with a GPRS limitation? Is GPRS unreliable? Or is it a network limitation on the server?
(The MTU on the server is 1500.) Does anyone have experience with best practices and the optimal packet size for SNMP UDP datagrams?
Though I am the author of that library, I could not answer the GPRS part, as I am not a mobile network expert.
What I can answer is the packet-count part, which is relatively simple if you check out the definition of "max-repetitions":
https://www.webnms.com/snmp/help/snmpapi/snmpv3/v2c/maxrepetition.html
By setting a larger value for this parameter, a single packet can contain more results, so obviously fewer packets are needed.
I used 10 in that Code Project article, because it was just an example. You might see from the link above that other libraries might use 50 as the default.
Regarding best practices for SNMP packet size, I've always been told that you should avoid exceeding the network MTU. In other words, set the max-repetitions so that the Ethernet frames don't regularly exceed 1500 bytes. (Of course, this assumes that the size of your table cells is predictable.)
While using larger packets should work on most well-configured networks, it's advisable to avoid fragmented packets on the network: packet re-assembly may create extra overhead in the networking equipment. And if you're going to split the PDUs over several packets anyway, the drawback of a few more back-and-forth requests is not that bad.
For example, Cisco equipment seems to follow this best practice, and it's recommended in a Microsoft article.
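As a rough back-of-the-envelope check, you can estimate a safe max-repetitions from the MTU. All the sizes below are assumptions for illustration; actual BER encodings vary by OID and value type.

using System;

// Rough estimate of how many repetitions fit in one MTU-sized UDP datagram.
const int Mtu = 1500;
const int IpUdpOverhead = 28;        // IPv4 (20) + UDP (8) headers
const int SnmpOverhead = 60;         // message header, community, PDU fields (approximate)
const int BytesPerVarbind = 40;      // OID + value for a typical table cell (assumed)

int maxRepetitions = (Mtu - IpUdpOverhead - SnmpOverhead) / BytesPerVarbind;
Console.WriteLine(maxRepetitions);   // about 35 with these assumptions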
(BTW, next time you have two separate questions, consider posting them as two questions!)

Need of checksum in serial communication

I am currently working on a project involving serial communication between a PC (a USB-to-serial application coded in C#) and an embedded platform (STM32F4).
I saw that in some cases it's mandatory to have a checksum in a communication frame.
The Communication configuration:
Baud-rate = 115200,
No Parity bit,
One StopBit,
No Handshake,
Frame Length : 16 bytes
Is it worth adding a checksum in my application? What are the reasons why I should (or should not) have this checksum?
Thank you for your answer.
Yes, you must have a checksum. The only acceptable non-hobbyist solution is a proper CRC-based checksum. The most common industry standard is CRC-16-CCITT (polynomial 0x1021). This will catch any single-bit error, most double-bit errors, and some burst errors.
Even though you'll only use RS-232 in an office environment(?), any EMI caused by crappy consumer electronics could cause glitches and incorrect data. There is a lot of such crappy electronics around: for example, it is not at all uncommon for the electronics in your PC to have poor EMC performance. In particular, there are countless USB-to-serial adapters of downright awful quality.
The UART hardware itself has no error detection worth mentioning; it is ancient 1960s technology. At the hardware level it only checks data integrity with the start and stop bits, and will miss any errors in between. (Parity checking is equally poor.)
Alternatively, you could perhaps get a USB-to-RS-485 adapter instead and use RS-485, which is far more rugged since it uses differential signalling. But that requires RS-485 transceivers on the target side too.
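For reference, a minimal sketch of CRC-16-CCITT in C#. The initial value 0xFFFF is the most common variant; verify the initial value and bit order against whatever your STM32 side computes.

static ushort Crc16Ccitt(byte[] data)
{
    ushort crc = 0xFFFF;                              // common initial value; some variants use 0x0000
    foreach (byte b in data)
    {
        crc ^= (ushort)(b << 8);
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) != 0
                ? (ushort)((crc << 1) ^ 0x1021)       // the 0x1021 polynomial
                : (ushort)(crc << 1);
    }
    return crc;
}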
It is customary to have a checksum to verify the correctness of the data, even though serial communication is relatively reliable. You will also definitely need a sync sequence of at least two bytes that are always assigned specific values you don't expect to appear in your data. The sync is used on the receiving side to find the start of each message, because this is stream communication, not packet-based communication.
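To make that concrete, here is a hypothetical layout for the 16-byte frame mentioned above, with two sync bytes and the CRC from the sketch above appended. The sync values 0xAA 0x55 and the choice to compute the CRC over the payload only are assumptions, not a standard.

// Hypothetical 16-byte frame: [0xAA][0x55][12 payload bytes][CRC hi][CRC lo]
static byte[] BuildFrame(byte[] payload)              // payload must be 12 bytes
{
    var frame = new byte[16];
    frame[0] = 0xAA; frame[1] = 0x55;                 // sync bytes (assumed values)
    Array.Copy(payload, 0, frame, 2, 12);
    ushort crc = Crc16Ccitt(payload);                 // CRC over the payload only (a design choice)
    frame[14] = (byte)(crc >> 8);
    frame[15] = (byte)crc;
    return frame;
}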

Sending images from a C++ process to a C# process

I am trying to send image data from a compiled C++ process to a compiled C# process. The C++ process accesses the webcam and does some processing on the image. The image is represented by a 2D array of pixels, each pixel being an 8-bit value (0-255), the gray-scale value of that pixel.
The image size is 640 by 480.
The C# application does some more processing and displays the image on the screen. The processes both run at the same time on my laptop (Windows 7), but I cannot make a single process that does all the steps, which is why I need my C++ and C# code to communicate.
I was wondering what is the best way to do this? I read about writing a UDP or TCP server in the C# part and a client on the C++ part, I can then send over the image data as a datagram. I was wondering if this is the best way and if it is whether UDP or TCP would be better?
EDIT: The C++ process is unmanaged C++, I don't have the option to run it as a managed DLL. Could I use named pipes to send over the image?
Finally, is UDP guaranteed to be in order if it is communicating locally? I realise the image would be over the size limit for UDP, but if delivery is in order I should be able to split the images up to send them over.
Interprocess communication can be done via sockets or pipes.
With sockets (TCP and UDP) you're essentially sending the data over the network stack to yourself. Luckily, since your computer knows itself, the data shouldn't leave the machine, so this should be pretty quick. TCP is guaranteed to be in order and has a bunch of other nice features, while UDP is pretty much "slap some headers onto the data and hope for the best". For this application TCP should be fine; UDP adds unneeded complexity.
Pipes are the other way for two processes to communicate. You basically have the C++ or C# process create a pipe and start the other process. You then use the pipe like a file: write to it and read from it. In C/C++ this can be done with a combination of the pipe, fork, and exec functions, or simply with the popen function. C# has similar facilities in the System.IO.Pipes namespace.
I suggest using a pipe via _popen (the popen equivalent on Windows), writing a series of ints to the pipe, and reading them on the other side. This is probably the easiest way... besides using one language, of course...
If you are writing both of the programs, you can compile the C++ one as a DLL and call a function that returns an array or some structure from your C# program with the DllImport attribute in the System.Runtime.InteropServices namespace.
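A minimal sketch of that interop route. The DLL name and the exported function are hypothetical; the native side is assumed to fill a caller-supplied buffer.

// C++ side (compiled to camera.dll, hypothetical export):
// extern "C" __declspec(dllexport) void GetFrame(unsigned char* buffer, int length);

using System.Runtime.InteropServices;

class Interop
{
    [DllImport("camera.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern void GetFrame(byte[] buffer, int length);

    static byte[] Capture()
    {
        var frame = new byte[640 * 480];   // one 8-bit grayscale frame
        GetFrame(frame, frame.Length);     // the native side fills the managed buffer
        return frame;
    }
}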
Why can't you do it in the same process? Is it because you need to mix C# and C++? In that case C++/CLI can be used as a bridge between the environments to have both C# code for the .NET CLR and C++ code compiled natively in one process.
If you really need two processes, there are several options when running on a local machine, but a small TCP-based service is probably best. Each image is 640 x 480 = 307,200 bytes, about 307 KB, which is larger than the roughly 65 KB limit of a UDP datagram.
I was wondering if this is the best way and if it is whether UDP or TCP would be better?
You usually resort to UDP as a speed optimization when TCP is not fast enough and packet loss is inconvenient rather than unacceptable. If you can't handle losing part of the image in transmission, I doubt you can resort to UDP.
Moreover, UDP is unlikely to give a performance boost in your case since you'll be using the loopback interface. This means that all TCP packets are likely to arrive in order and without loss, making TCP extra cheap.
If you write your application using TCP and in the future, for some reason, you decide the processes no longer run on the same machine, you won't have to change your code.
Finally, TCP sockets are just easier to use, so unless TCP is not fast enough on your machine, I would stick with TCP sockets.
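A minimal sketch of the TCP route on loopback: the C# side listens and reads exactly one 640x480 frame. The port number 5000 is an arbitrary assumption; the C++ sender would connect to 127.0.0.1:5000 and write the raw bytes.

using System.IO;
using System.Net;
using System.Net.Sockets;

class FrameReceiver
{
    static byte[] ReceiveFrame()
    {
        var listener = new TcpListener(IPAddress.Loopback, 5000);  // port is an assumption
        listener.Start();
        using (var client = listener.AcceptTcpClient())
        using (var stream = client.GetStream())
        {
            var frame = new byte[640 * 480];
            int offset = 0;
            while (offset < frame.Length)      // TCP is a stream: keep reading until the frame is complete
            {
                int read = stream.Read(frame, offset, frame.Length - offset);
                if (read == 0) throw new IOException("sender closed the connection early");
                offset += read;
            }
            listener.Stop();
            return frame;
        }
    }
}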
is UDP guaranteed in order if it is communicating locally?
AFAIK, this behavior is not guaranteed. It is very likely to work most of the time, but unless you can find a quote from relevant documentation, I wouldn't count on this.
Could I use named pipes to send over the image?
Yes, named pipes are very similar to sockets, but they're known to be slow.
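If you do go the named-pipe route, .NET exposes them through System.IO.Pipes. A minimal receiving sketch follows; the pipe name "frames" is an assumption, and the unmanaged C++ side would open \\.\pipe\frames with CreateFile and write the raw bytes.

using System.IO.Pipes;

class PipeReceiver
{
    static byte[] ReceiveFrame()
    {
        using (var pipe = new NamedPipeServerStream("frames", PipeDirection.In))
        {
            pipe.WaitForConnection();          // blocks until the C++ side connects
            var frame = new byte[640 * 480];
            int offset = 0;
            while (offset < frame.Length)
            {
                int read = pipe.Read(frame, offset, frame.Length - offset);
                if (read == 0) break;          // writer disconnected early
                offset += read;
            }
            return frame;
        }
    }
}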
One way of doing it, apart from sockets, would be to save the image data to disk from your C++ application and read it off the disk in your C# application. Of course, you will need some sort of read/write synchronisation so that the file is not read before it is fully written.
Or, if you finally decide to use UDP or TCP, try RTP. RTP runs over UDP with an extra layer of timestamps and sequence numbering to ensure correct ordering of delivery. You should be able to find C++ and C# implementations of the protocol. In particular, you can send images over an RTP/MJPEG stream if your application produces JPEG images.
Just move to completely managed code :p (To keep it all in the same process)
https://net7mma.codeplex.com/SourceControl/latest has a C# RtspServer and RtpClient

Serial data logging

I have a device connected to my computer that sends serial data to the computer every 5 mins. I want to write a basic program to capture this serial data every 5 mins and put it into a database. I was hoping to use C# because I have used C# with databases before and found it quite easy.
Can anybody offer me any advice on how I might do this? I really have no idea where to start; I know it sounds easy in theory, but when I started I actually found it really hard.
Using C#, you can use the System.IO.Ports namespace to communicate over the serial ports - there's a nice article here.
Alternatively, you can use Python and the pySerial module. I've written an app to communicate over the serial port using pySerial - it's quite easy to use, and can run on many different operating systems including OSX and Windows (I'm assuming you're using Windows). Python also has built-in support for SQLite.
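A minimal sketch of the C# route, reading records as they arrive and handing them to your database code. The port settings and the assumption that the device terminates each record with a newline are placeholders.

using System;
using System.IO.Ports;

class SerialLogger
{
    static void Main()
    {
        var port = new SerialPort("COM1", 9600, Parity.None, 8, StopBits.One); // assumed settings

        port.DataReceived += (s, e) =>
        {
            string record = ((SerialPort)s).ReadLine(); // assumes newline-terminated records
            SaveToDatabase(DateTime.Now, record);       // your existing database code goes here
        };

        port.Open();
        Console.ReadLine(); // keep the process alive between readings
    }

    static void SaveToDatabase(DateTime timestamp, string record)
    {
        // e.g. an INSERT via your preferred database library; omitted here
    }
}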
The problem with capturing data on a serial port is that serial ports aren't thread-safe, so if there is more than one listener, data will be corrupted.
If you are absolutely sure that you're the only one listening for data on this port, .NET has a built-in wrapper, System.IO.Ports.SerialPort, which you can use to connect to COM1, COM2, etc. You'll need to know the rate in bits/sec at which the device sends data (its baud rate), its error-checking (parity) protocol, and the format of the data it is sending (you'll get it as a byte array, which you must convert byte by byte into data you can work with). Then your program should be able to open the port and listen for DataReceived events with a handler that reads and digests the data.

Again, it's VERY important that you never have two threads trying to read at once. The easiest way is to set a volatile boolean indicating that a handler is reading data; if another handler is spawned while a previous one is still running, the first thing the new one should do is check that flag, and since it's set, exit immediately.

c# .NET Serial Driver performance

My application needs to communicate with an embedded device, which runs at about a 1 MHz clock speed, through serial communication. In the middle of that process, we found that we were missing some data from the device.
So I began testing the performance of the serial driver I was using. The device keeps sending raw data, with a counter incremented for each packet, at a baud rate of 115200 bits/sec. When it was connected to HyperTerminal and run for a whole night, we found that no data was missed.
But when I used the C# serial driver with a DataReceived handler and a parser written to detect whether packets were missed, we encountered situations like:
1) Missing packets
2) Buffer overrun errors
I am not able to come to a conclusion. I would like all of your views on the data available.
Is this a test that stretches the boundaries of any serial device driver, or is the .NET serial driver not up to the mark?
The way I have implemented it is very simple. I have just used a DataReceived handler that appends the received data to a List, which in turn is consumed by a parser running in a different thread at highest priority. The only work done within the DataReceived handler is adding the received data to the list.
Thanks in advance.
The way I have implemented it is very simple. I have just used a DataReceived handler that appends the received data to a List, which in turn is consumed by a parser running in a different thread at highest priority.
This is probably your problem. The parser is likely CPU-bound, which means sticking it on Highest priority means that it will consume the vast majority of CPU cycles until it runs out of stuff to parse, and so your DataReceived thread is starved of execution and ends up missing stuff.
In short, don't fiddle around with priority unless you know what you're doing. Set the priority back to normal and you'll get better results.
Using a List to communicate between two threads is a bad idea, since it's not thread-safe (unless you have locks around it... do you?). Use a ConcurrentQueue<T> if you are using .NET 4.
Also, as Anon points out, high-priority threads are rarely the right answer. In this case your parser should, if anything, run at below-normal priority, since its job is just to consume the queue without affecting any other thread that might be doing I/O. You can keep an eye on the queue length in the parser thread and issue warnings if it falls too far behind.
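A minimal sketch of that producer/consumer shape, assuming .NET 4's ConcurrentQueue<T>: the DataReceived handler only enqueues raw chunks, and a normal-priority parser thread drains the queue.

using System.Collections.Concurrent;
using System.IO.Ports;
using System.Threading;

class SerialPipeline
{
    static readonly ConcurrentQueue<byte[]> queue = new ConcurrentQueue<byte[]>();

    static void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
    {
        var port = (SerialPort)sender;
        var chunk = new byte[port.BytesToRead];
        port.Read(chunk, 0, chunk.Length);
        queue.Enqueue(chunk);                  // cheap: the handler does no parsing
    }

    static void ParserLoop()                   // run this on a normal (or below-normal) priority thread
    {
        while (true)
        {
            byte[] chunk;
            if (queue.TryDequeue(out chunk))
                Parse(chunk);                  // your packet/counter checking goes here
            else
                Thread.Sleep(1);               // idle briefly when there is nothing to parse
        }
    }

    static void Parse(byte[] chunk) { /* ... */ }
}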
