C# .NET serial driver performance

My application needs to communicate with an embedded device (clocked at about 1 MHz) over a serial connection. In the middle of that process, we found that we were missing some data from the device.
So I began testing the performance of the serial driver I was using. The device continuously sends raw data, incrementing a counter with each packet, at a baud rate of 115200 bits/sec. When connected to HyperTerminal and run overnight, we found that no data was missed.
But when I used the C# serial driver with a DataReceived handler and a parser written to detect whether packets were missed, we encountered situations like
1) Missed packets
2) Buffer overrun errors
I am not able to come to a conclusion and would like your views on the data available.
Is this a test that stretches the limits of any serial device driver, or is the .NET serial driver simply not written up to the mark?
My implementation is very simple. I use a DataReceived handler that appends the received data to a List, which in turn is consumed by a parser running on a different thread with the highest priority. The DataReceived handler does nothing but add the received data to the list.
Thanks in Advance

My implementation is very simple. I use a DataReceived handler that appends the received data to a List, which in turn is consumed by a parser running on a different thread with the highest priority.
This is probably your problem. The parser is likely CPU-bound, so putting it on Highest priority means it will consume the vast majority of CPU cycles until it runs out of stuff to parse, and your DataReceived thread is starved of execution and ends up missing data.
In short, don't fiddle around with priority unless you know what you're doing. Set the priority back to normal and you'll get better results.

Using a List to communicate between two threads is a bad idea since it's not thread safe (unless you have locks around it - do you?). Use a ConcurrentQueue<T> if you are on .NET 4.
Also, as Anon points out, high-priority threads are rarely the right answer. In this case, your parser should, if anything, be running at below-normal priority, since its job is just to consume the queue without getting in the way of any other thread that might be doing I/O. You can keep an eye on the queue length in the parser thread and issue a warning if it falls too far behind.
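A minimal sketch of that layout, assuming .NET 4, a hypothetical port name, and a hypothetical ParsePackets method: the DataReceived handler only copies bytes into a ConcurrentQueue, and the parser thread (left at normal priority) drains it.

    using System;
    using System.Collections.Concurrent;
    using System.IO.Ports;
    using System.Threading;

    class SerialReader
    {
        static readonly ConcurrentQueue<byte[]> queue = new ConcurrentQueue<byte[]>();

        static void Main()
        {
            var port = new SerialPort("COM1", 115200);   // port name/baud are assumptions
            port.DataReceived += (s, e) =>
            {
                // Keep the handler tiny: read whatever is available and enqueue it.
                int count = port.BytesToRead;
                var buffer = new byte[count];
                port.Read(buffer, 0, count);
                queue.Enqueue(buffer);
            };
            port.Open();

            // Parser thread at normal priority: drain the queue and parse.
            var parser = new Thread(() =>
            {
                byte[] chunk;
                while (true)
                {
                    while (queue.TryDequeue(out chunk))
                        ParsePackets(chunk);          // hypothetical parser
                    Thread.Sleep(1);                  // yield briefly when the queue is empty
                }
            });
            parser.IsBackground = true;
            parser.Start();

            Console.ReadLine();
        }

        static void ParsePackets(byte[] chunk) { /* check packet counters here */ }
    }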

Related

High-performance TCP Socket programming in .NET C#

I know this topic has been asked about before, and I have read almost all the related threads and comments, but I still haven't found the answer to my problem.
I'm working on a high-performance network library that must include a TCP server and client, be able to accept 30000+ connections, and achieve the highest possible throughput.
I know very well that I have to use async methods, and I have already implemented and tested every kind of solution I could find.
In my benchmarks only minimal code was used to avoid any overhead, and I used profiling to minimize the CPU load, so there is no more room for simple optimization. On the receiving socket the buffered data was always read, counted, and discarded so the socket buffer never filled up completely.
The case is very simple: one TCP socket listens on localhost, another TCP socket connects to the listening socket (from the same program, on the same machine, of course), then an infinite loop starts sending 256kB packets from the client socket to the server socket.
A timer with 1000ms interval prints a byte counter from both sockets to the console to make the bandwidth visible then resets them for the next measurement.
I've found the sweet spot: a 256kB packet size with a 64kB socket buffer size gives the maximum throughput.
With the async/await type methods I could reach
~370MB/s (~3.2gbps) on Windows, ~680MB/s (~5.8gbps) on Linux with mono
With the BeginReceive/EndReceive/BeginSend/EndSend type methods I could reach
~580MB/s (~5.0gbps) on Windows, ~9GB/s (~77.3gbps) on Linux with mono
With the SocketAsyncEventArgs/ReceiveAsync/SendAsync type methods I could reach
~1.4GB/s (~12gbps) on Windows, ~1.1GB/s (~9.4gbps) on Linux with mono
Problems are the following:
async/await methods were the slowest, so I will not work with them
The BeginReceive/EndReceive methods (together with BeginAccept/EndAccept) start new async threads; under Linux/mono every new socket instance was extremely slow (when there were no free threads in the ThreadPool, mono spun up new ones, but creating 25 connection instances took about 5 minutes, and creating 50 connections was impossible; the program just stopped doing anything after ~30 connections).
Changing the ThreadPool size did not help at all, and I would not want to change it anyway (it was just a debugging move).
The best solution so far is SocketAsyncEventArgs; it produces the highest throughput on Windows, but on Linux/mono it is slower than on Windows, whereas before it was the opposite.
I've benchmarked both my Windows and Linux machines with iperf:
Windows machine produced ~1GB/s (~8.58gbps), Linux machine produced ~8.5GB/s (~73.0gbps)
The weird thing is that on Windows iperf produced a weaker result than my application, while on Linux it produced a much higher one.
First of all, I would like to know whether these results are normal, or whether I can get better results with a different solution.
If I decide to use the BeginReceive/EndReceive methods (they produced the relatively highest result on Linux/mono), how can I fix the threading problem so that creating connection instances is fast and the stalled state after creating multiple instances is eliminated?
I will continue making further benchmarks and will share the results if there is anything new.
================================= UPDATE ==================================
I promised code snippets, but after many hours of experimenting the overall code is kind of a mess, so I will just share my experience in case it helps someone.
I had to realize that under Windows 7 the loopback device is slow; I could not get a result higher than 1GB/s with iperf or NTttcp. Only Windows 8 and newer versions have fast loopback, so I don't care about Windows results any more until I can test on a newer version. SIO_LOOPBACK_FAST_PATH should be enabled via Socket.IOControl, but it throws an exception on Windows 7.
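For reference, a hedged sketch of how the fast loopback path can be requested. SIO_LOOPBACK_FAST_PATH has no named constant in .NET, so the raw Winsock control code (0x98000010 per the Winsock headers) is used, and on Windows 7 the call is expected to throw.

    using System;
    using System.Net.Sockets;

    static class FastLoopback
    {
        // Winsock control code for SIO_LOOPBACK_FAST_PATH (effective on Windows 8+ only).
        const int SIO_LOOPBACK_FAST_PATH = unchecked((int)0x98000010);

        public static void TryEnable(Socket socket)
        {
            try
            {
                // A non-zero input value enables the fast path for this socket.
                socket.IOControl(SIO_LOOPBACK_FAST_PATH, BitConverter.GetBytes(1), null);
            }
            catch (SocketException)
            {
                // Windows 7 and older reject the option; fall back to the normal loopback path.
            }
        }
    }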
It turned out that the most powerful solution is the Completed-event-based SocketAsyncEventArgs implementation, both on Windows and on Linux/Mono. Creating a few thousand client instances never messed up the ThreadPool, and the program did not stop suddenly as I mentioned above. This implementation is very easy on the threading.
Creating 10 connections to the listening socket and feeding data from 10 separate ThreadPool threads through the clients could produce ~2GB/s of traffic on Windows and ~6GB/s on Linux/Mono.
Increasing the client connection count did not improve the overall throughput, but the total traffic became distributed among the connections; this might be because the CPU load was 100% on all cores/threads even with 5, 10, or 200 clients.
I think the overall performance is not bad: 100 clients could produce around ~500mbit/s of traffic each. (Of course this was measured over local connections; a real-life scenario over a network would be different.)
One more observation I would share: experimenting with both the socket in/out buffer sizes and with the program read/write buffer sizes/loop cycles affected performance greatly, and very differently on Windows and on Linux/Mono.
On Windows the best performance has been reached with 128kB socket-receive, 32kB socket-send, 16kB program-read and 64kB program-write buffers.
On Linux the previous settings produced very weak performance; 512kB for both the socket-receive and socket-send buffers, with 256kB program-read and 128kB program-write buffers, worked best.
Now my only problem is that if I try to create 10000 connecting sockets, after around 7005 it just stops creating instances, does not throw any exceptions, and the program keeps running as if nothing were wrong. I don't know how it can exit a specific for loop without a break, but it does.
Any help would be appreciated regarding anything I was talking about!
Because this question gets a lot of views, I decided to post an "answer", even though technically it isn't an answer but my final conclusion for now, so I will mark it as the answer.
About the approaches:
The async/await functions tend to produce awaitable async Tasks assigned to the TaskScheduler of the dotnet runtime, so having thousands of simultaneous connections, and therefore thousands of reading/writing operations, will start up thousands of Tasks. As far as I know this creates thousands of state machines stored in RAM and countless context switches in the threads they are assigned to, resulting in very high CPU overhead. With a few connections/async calls it is better balanced, but as the awaitable Task count grows it slows down exponentially.
The BeginReceive/EndReceive/BeginSend/EndSend socket methods are technically async methods with no awaitable Tasks, just callbacks at the end of the call, which actually handles the multithreading better. Still, in my opinion the dotnet design of these socket methods is poor, but for simple solutions (or a limited number of connections) it is the way to go.
The SocketAsyncEventArgs/ReceiveAsync/SendAsync type of socket implementation is the best on Windows for a reason. It uses Windows IOCP in the background to achieve the fastest async socket calls, with overlapped I/O and a special socket mode. This solution is the "simplest" and fastest under Windows. But under mono/linux it will never be that fast, because mono emulates Windows IOCP on top of linux epoll, which is actually much faster than IOCP, but it has to emulate IOCP to achieve dotnet compatibility, and this causes some overhead.
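A minimal sketch of the Completed-event receive loop this refers to (not the author's actual library code): one SocketAsyncEventArgs per connection, re-armed after every completion, with synchronous completions handled inline.

    using System;
    using System.Net.Sockets;

    static class SaeaReceiver
    {
        // Start an asynchronous receive loop on an already-connected socket.
        public static void Start(Socket socket)
        {
            var args = new SocketAsyncEventArgs();
            args.SetBuffer(new byte[64 * 1024], 0, 64 * 1024);
            args.UserToken = socket;
            args.Completed += OnReceiveCompleted;

            // ReceiveAsync returns false when it completed synchronously; in that
            // case the Completed event is not raised, so handle the result inline.
            if (!socket.ReceiveAsync(args))
                OnReceiveCompleted(socket, args);
        }

        static void OnReceiveCompleted(object sender, SocketAsyncEventArgs args)
        {
            var socket = (Socket)args.UserToken;
            while (true)
            {
                if (args.SocketError != SocketError.Success || args.BytesTransferred == 0)
                {
                    socket.Close();           // peer closed the connection or an error occurred
                    return;
                }

                // Process args.Buffer[0 .. args.BytesTransferred) here
                // (in the benchmark the bytes were just counted and discarded).

                if (socket.ReceiveAsync(args))
                    return;                   // completion will arrive later on an I/O thread
                // else: completed synchronously, loop and handle it immediately
            }
        }
    }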
About buffer sizes:
There are countless ways to handle data on sockets. Reading is straightforward: data arrives, You know its length, and You just copy the bytes from the socket buffer into Your application and process them.
Sending data is a bit different.
You can pass Your complete data to the socket and it will cut it into chunks, copying the chunks into the socket buffer until there is no more to send, and the send method of the socket will return when all data is sent (or when an error happens).
Or You can take Your data, cut it into chunks yourself, and call the socket's send method with one chunk at a time, sending the next chunk when it returns, until there is no more.
In either case You should consider what socket buffer size to choose. If You are sending a large amount of data, then the bigger the buffer is, the fewer chunks have to be sent, therefore fewer calls in Your (or the socket's internal) loop, less memory copying, less overhead.
But allocating large socket buffers and program data buffers results in large memory usage, especially if You have thousands of connections, and allocating (and freeing) large blocks of memory repeatedly is always expensive.
On the sending side a 1-2-4-8kB socket buffer size is ideal for most cases, but if You are going to send large files (over a few MB) regularly, then a 16-32-64kB buffer size is the way to go. There is usually no point in going over 64kB.
But this is only an advantage if the receiving side has relatively large receive buffers too.
Usually over internet connections (not a local network) there is no point in going over 32kB; even 16kB is ideal.
Going under 4-8kB can result in an exponentially increased call count in the reading/writing loop, causing a large CPU load and slow data processing in the application.
Go under 4kB only if You know Your messages will usually be smaller than 4kB, or only very rarely over 4kB.
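To make the chunking idea concrete, a small sketch of the second approach (cutting the data into application-chosen chunks); the chunk size is passed in by the caller and is only an illustrative parameter, not a recommendation for every workload.

    using System;
    using System.Net.Sockets;

    static class ChunkedSender
    {
        // Send a large payload in fixed-size chunks over a connected socket.
        public static void SendInChunks(Socket socket, byte[] payload, int chunkSize)
        {
            int offset = 0;
            while (offset < payload.Length)
            {
                int size = Math.Min(chunkSize, payload.Length - offset);
                // Socket.Send blocks until the chunk has been copied into the
                // socket's send buffer (or throws on error).
                int sent = socket.Send(payload, offset, size, SocketFlags.None);
                offset += sent;
            }
        }
    }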
My conclusion:
Based on my experiments, the built-in socket classes/methods/solutions in dotnet are OK, but not efficient at all. My simple linux C test programs using non-blocking sockets could outperform the fastest, "high-performance" solution of dotnet sockets (SocketAsyncEventArgs).
This does not mean it is impossible to have fast socket programming in dotnet, but under Windows I had to make my own implementation of Windows IOCP by communicating directly with the Windows kernel via InteropServices/Marshaling, calling Winsock2 methods directly, using a lot of unsafe code to pass the context structs of my connections as pointers between my classes/calls, creating my own ThreadPool, creating I/O event handler threads, and creating my own TaskScheduler to limit the count of simultaneous async calls and avoid pointlessly many context switches.
This was a lot of work, with a lot of research, experimenting, and testing. If You want to do it on Your own, only do it if You really think it is worth it. Mixing unsafe/unmanaged code with managed code is a pain, but in the end it is worth it, because with this solution I could reach about 36000 http requests/sec with my own http server on a 1gbit lan, on Windows 7, with an i7 4790.
That is a level of performance I could never reach with the dotnet built-in sockets.
When running my dotnet server on an i9 7900X on Windows 10, connected to a 4c/8t Intel Atom NAS on Linux via 10gbit lan, I can use the complete bandwidth (therefore copying data at 1GB/s) no matter whether I have 1 or 10000 simultaneous connections.
My socket library also detects whether the code is running on linux, and then, instead of Windows IOCP (obviously), it uses linux kernel calls via InteropServices/Marshalling to create and use sockets and handles the socket events directly with linux epoll, which managed to max out the performance of the test machines.
Design tip:
As it turned out, it is difficult to design a networking library from scratch, especially one that is meant to be universal for all purposes. You either have to design it with many settings, or tailor it to the specific task You need.
This means finding the proper socket buffer sizes, the I/O processing thread count, the worker thread count, and the allowed async task count; these all have to be tuned to the machine the application is running on, to the connection count, and to the type of data You want to transfer over the network. This is why the built-in sockets do not perform that well: they have to be universal, and they do not let You set these parameters.
In my case, assigning more than 2 dedicated threads to I/O event processing actually made the overall performance worse, because only 2 RSS queues were in use, and it caused more context switching than is ideal.
Choosing wrong buffer sizes will result in performance loss.
Always benchmark different implementations for the simulated task You need to find out which solution or setting is the best.
Different settings may produce different performance results on different machines and/or operating systems!
Mono vs Dotnet Core:
Since I've programmed my socket library in an FW/Core compatible way, I could test it under linux both with mono and with Core native compilation. Interestingly, I could not observe any remarkable performance differences; both were fast, but of course leaving mono behind and compiling on Core should be the way to go.
Bonus performance tip:
If Your network card is capable of RSS (Receive Side Scaling), then enable it in Windows in the network device settings, in the advanced properties, and set the RSS queue count from 1 to as high as you can / as high as is best for your performance.
If it is supported by Your network card, it is usually set to 1, which means the kernel assigns the network events to be processed by only one CPU core. If You can increase this queue count, the network events will be distributed between more CPU cores, resulting in much better performance.
On linux it is also possible to set this up, but in different ways; it is better to search for information on Your linux distro/lan driver.
I hope my experience will help some of You!
I had the same problem. You should take a look at:
NetCoreServer
Every thread in the .NET CLR ThreadPool can handle one task at a time. So to handle more async connects/reads etc., you have to change the ThreadPool size by using:
ThreadPool.SetMinThreads(Int32, Int32)
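For example (the numbers are placeholders to be tuned per machine and workload, not recommendations):

    using System.Threading;

    class PoolSetup
    {
        static void Main()
        {
            // Raise the minimum worker and I/O completion thread counts so the pool
            // does not throttle thread creation while many async operations ramp up.
            int workerThreads = 256;            // placeholder value
            int completionPortThreads = 256;    // placeholder value
            ThreadPool.SetMinThreads(workerThreads, completionPortThreads);
        }
    }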
Using EAP (event-based asynchronous pattern) is the way to go on Windows. I would use it on Linux too, because of the problems you mentioned, and accept the performance hit.
The best would be I/O completion ports on Windows, but they are not portable.
PS: when it comes to serializing objects, you are highly encouraged to use protobuf-net. It binary-serializes objects up to 10x faster than the .NET binary serializer and saves a little space too!
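A minimal protobuf-net usage sketch; the Message type and field numbers are made up for illustration.

    using System.IO;
    using ProtoBuf;   // protobuf-net NuGet package

    [ProtoContract]
    class Message
    {
        [ProtoMember(1)] public int Id { get; set; }
        [ProtoMember(2)] public string Payload { get; set; }
    }

    static class Codec
    {
        public static byte[] Encode(Message msg)
        {
            using (var ms = new MemoryStream())
            {
                Serializer.Serialize(ms, msg);   // compact binary encoding
                return ms.ToArray();
            }
        }

        public static Message Decode(byte[] data)
        {
            using (var ms = new MemoryStream(data))
                return Serializer.Deserialize<Message>(ms);
        }
    }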

Preventing a bottleneck in device communication

I've got quite an abstract question. I'm working on a project that requires constant device communication. I'm integrating multiple devices with an external processing unit and a touch panel to execute certain methods. For example, the "start video call" button on the touch panel activates a relay, turns on a display device, a camera device and a microphone device, etc.
On the flip side, I'm also trying to monitor these devices. What status do they currently have? Are they enabled/disabled? What input is the display device currently on?
So far, I've come up with two solutions to prevent a bottleneck in the communication, where I'm constantly polling (i.e. every two to five seconds, to keep an accurate and up-to-date status) the on-state and input-state of the display device.
1. Make use of threading so I can enqueue the different commands and execute them asynchronously. By also reading the responses asynchronously, all communication should be nicely spaced out, but I'd have a very "busy" communication line, taking its toll on the processing unit.
2. With the help of events, have the display device notify the processor of its changed status. This would take a lot of stress off the communication line, but I feel like this is very easily disrupted. If the device doesn't raise its events correctly (or the events are missed), the monitored state does not correspond with the actual state.
I'm curious whether there are other ways of going about this issue. As of now, I'm leaning towards the second one because it stresses the processing unit a whole lot less; I just feel like I would have to build in a lot of safeguards to prevent an inaccurate representation of the actual device states.
The project runs in C# on .NET 3.5.
Polling works, but it isn't fun or optimal. Reactive is best, but as you've mentioned there may be a hiccup ensuring you're still listening to the device and not just standing by for nothing. In this situation it makes sense to combine both approaches: poll when you're waiting or haven't heard a response in a while, and rely on the events when your polling returns good info, pausing the polling.
That said, you shouldn't worry about taxing the unit too much with polling on various threads. This sounds like a purpose-built device, so as long as you're not running it hot or stressing it to the max all the time, using your resources is perfectly fine.
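A rough sketch of that hybrid on .NET 3.5, assuming a hypothetical display-device driver that exposes a StatusChanged event and a blocking QueryStatus() call: events keep the cached state fresh, and a watchdog timer only polls when no event has arrived for a while.

    using System;
    using System.Timers;

    // Hypothetical device interface, for illustration only.
    interface IDisplayDevice
    {
        event EventHandler StatusChanged;
        string QueryStatus();    // blocking poll over the communication line
    }

    class DisplayMonitor
    {
        readonly IDisplayDevice device;
        readonly Timer watchdog;
        DateTime lastUpdate = DateTime.MinValue;

        public string CachedStatus { get; private set; }

        public DisplayMonitor(IDisplayDevice device)
        {
            this.device = device;
            device.StatusChanged += (s, e) => Refresh();

            // Poll only as a fallback: every 5 seconds, check whether an event
            // has been seen recently; if not, query the device explicitly.
            watchdog = new Timer(5000);
            watchdog.Elapsed += (s, e) =>
            {
                if (DateTime.UtcNow - lastUpdate > TimeSpan.FromSeconds(10))
                    Refresh();
            };
            watchdog.Start();
        }

        void Refresh()
        {
            CachedStatus = device.QueryStatus();
            lastUpdate = DateTime.UtcNow;
        }
    }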

Why do I stop receiving OSC messages after a while on Mac?

I'm working on a Unity game that receives OSC messages from the Muse EEG headset. I've tried two 3rd party C# libraries to handle the OSC communication, UnityOSC and unity-OSC-receiver. Both implement the OSC communication with an underlying System.Net.Sockets.UdpClient. Everything is running smoothly on Windows, but on OSX, after a while, I just stop receiving messages every time. No exceptions or error messages, no indication of what went wrong at all, just silence.
My application roughly works as follows:
Start a thread that spawns a process that runs Muse-IO. This makes the headset start sending messages. After starting the process, this thread just waits on process.WaitForExit().
Another thread runs a while loop - not in MonoBehaviour.Update(), that's not fast enough - that keeps receiving and processing OSC messages. In both libraries this essentially boils down to calling UdpClient.Receive() (a simplified sketch follows this list).
The game uses the processed messages in the normal Unity update cycle.
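For context, a simplified sketch of what that receive loop boils down to in both libraries (the real code parses the datagram into OSC messages before handing it to the Unity side):

    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    class OscReceiver
    {
        public void Start(int port)
        {
            var client = new UdpClient(port);
            var thread = new Thread(() =>
            {
                var remote = new IPEndPoint(IPAddress.Any, 0);
                while (true)
                {
                    try
                    {
                        byte[] datagram = client.Receive(ref remote);   // blocks until a packet arrives
                        // Parse the OSC packet here and queue it for the Unity update loop.
                    }
                    catch (SocketException)
                    {
                        // A receive failure would surface here; the real code should log it.
                    }
                }
            });
            thread.IsBackground = true;   // dies with the process
            thread.Start();
        }
    }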
Some 120 to 140 seconds after the connection is initialized, the stream of messages just stops, and so far I haven't been able to figure out why. The connection indicator light on the headset stays on, but nothing indicates it's actually still sending data.
Things I've ruled out:
It's not because of the number or size of the messages. If I modify the command to the headset so it only sends some categories of messages, cutting the total in half (from about 600/s to 300/s), the timeout still happens at the same time.
It's not the OSC library. I get the exact same results with both OSC libraries.
It's not the firewall. The firewall is off.
It's probably not the port being used by something else. I tried different ports with the same result.
It doesn't appear to be Muse's OSX driver. When I use their GUI to visualize the incoming data, it keeps receiving data for as long as I want.
I suspect that Mono, Unity or OSX might be shutting down (garbage-collecting?) the Muse-IO process or thread, because the time before the problem occurs seems to be pretty much constant regardless of what I try. But I'm unsure how to further diagnose, let alone fix this now. Any clues, suggestions or amazing solutions would be most welcome.
I found the cause.
After spawning the I/O process, the thread would do
print("Process started!");
process.PriorityClass = ProcessPriorityClass.High;
process.WaitforExit();
In hindsight, that print statement is really poorly placed, oh well. It worked fine on Windows: according to the docs, changing process priority only requires admin privileges if you're increasing it to Realtime. Not so on Mac, though; apparently setting it to High also requires elevated rights on OSX. The resulting exception was silent/undetected/uncaught because it happened outside the main thread.
Then, several minutes later, it seems the thread is garbage collected, including its child process, even though that's still running. That delay really threw me off, making me look for the cause in all the wrong places.
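In hindsight, a guard like the following would have surfaced the problem immediately; this is a sketch of the fix (using Unity's Debug.LogWarning for visibility), not the exact code I ended up with.

    try
    {
        // Optional: raising the priority needs elevated rights on OSX.
        process.PriorityClass = ProcessPriorityClass.High;
    }
    catch (Exception ex)
    {
        // Never let an exception silently kill a worker thread.
        UnityEngine.Debug.LogWarning("Could not raise Muse-IO priority: " + ex.Message);
    }
    process.WaitForExit();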
Lessons learned:
Be more careful with possible exceptions when multithreading,
Don't mess with process priority if you don't absolutely have to,
And never trust the docs.

Timeouts in C# serial port

I am using the C# SerialPort library for communication between a sensor and a PC.
I am frequently getting timeouts with the SerialPort.Read() method even though there is data there. I used serial sniffers to check that I am receiving all the packets at the port, but somehow .NET does not pick them all up and times out. I am reading bytes, and the number of bytes I receive is 2112 less than the serial port buffer size. I have tried multiple things and am now thinking of using native C/C++ and calling it from C#. Can someone share more thoughts, or has anyone used native C/C++ code from C#?
running at baud rates 460800 to 921600
Those are pretty unusual baud rates, rather high. Clearly you are not using the DataReceived event, so it becomes rather critical how often you call the Read() method. Spend some time doing something else, including Windows deciding that something more important needs to be done and context-switching away from your thread, and the receive buffer will quickly overflow. Not implementing the SerialPort.ErrorReceived event is a standard mistake, so you just don't see those overflows; all you see is missing data.
Writing this code in C++ is very unlikely to bring relief. There's only one API for serial ports; SerialPort is just a thin wrapper and uses the same winapi functions your C++ code would.
So take all of the following steps (a small configuration sketch follows the list):
Implement the ErrorReceived event so you know that overflows occur
Favor using the DataReceived event so you don't depend on calling Read() frequently enough
Set the ReadBufferSize to a nice big number so the driver can take up the slack
Set the Handshake property so the driver can tell the device to stop sending when the buffer is full
Check if you can implement a protocol so the device doesn't just fire-hose the machine
Lower the baud rate if it still isn't reliable enough.
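Putting those steps together, a minimal configuration sketch; the port name, baud rate, and buffer size are placeholders to adjust for the actual device.

    using System;
    using System.IO.Ports;

    class SensorPort
    {
        static void Main()
        {
            var port = new SerialPort("COM3", 460800)      // placeholder port/baud
            {
                ReadBufferSize = 1 << 20,                  // generous driver-side buffer
                Handshake = Handshake.RequestToSend        // let the driver throttle the device
            };

            // Overflows and framing problems show up here instead of as silent data loss.
            port.ErrorReceived += (s, e) => Console.WriteLine("Serial error: " + e.EventType);

            port.DataReceived += (s, e) =>
            {
                int count = port.BytesToRead;
                var buffer = new byte[count];
                port.Read(buffer, 0, count);
                // Hand the bytes off to a queue/parser; keep this handler short.
            };

            port.Open();
            Console.ReadLine();
        }
    }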

Buffer pool management using C#

We need to develop some kind of buffer management for an application we are developing using C#.
Essentially, the application receives messages from devices as and when they come in (there could be many in a short space of time). We need to queue them up in some kind of buffer pool so that we can process them in a managed fashion.
We were thinking of allocating a block of memory in 256 byte chunks (all messages are less than that) and then using buffer pool management to have a pool of available buffers that can be used for incoming messages and a pool of buffers ready to be processed.
So the flow would be "Get a buffer" (process it) "Release buffer" or "Leave it in the pool". We would also need to know when the buffer was filling up.
Potentially, we would also need a way to "peek" into the buffers to see what the highest priority buffer in the pool is rather than always getting the next buffer.
Is there already support for this in .NET or is there some open source code that we could use?
C#'s memory management is actually quite good, so instead of having a pool of buffers you could just allocate exactly what you need and stick it into a queue. Once you are done with a buffer, just let the garbage collector handle it.
One other option (knowing only very little about your application) is to process the messages minimally as you get them and turn them into full-fledged objects (with priorities and all); then your queue could prioritize them just by inspecting the correct set of attributes or methods.
If your messages come in too fast even for minimal processing, you could have a two-queue system: one queue of unprocessed buffers, and a second queue of message objects built from the buffers.
I hope this helps.
#grieve: Networking is native, meaning that when buffers are used to receive/send data on the network, they are pinned in memory. See my comments below for elaboration.
Why wouldn't you just receive the messages, create a DeviceMessage (for lack of a better name) object, and put that object into a Queue? If prioritization is important, implement a PriorityQueue class that handles that automatically (by placing the DeviceMessage objects in priority order as they're inserted into the queue). That seems like a more OO approach and would simplify maintenance over time with regard to the prioritization.
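A rough sketch of that idea, with a hypothetical DeviceMessage type and a simple priority queue built on a SortedDictionary (lower number means higher priority here):

    using System.Collections.Generic;

    // Hypothetical message type, for illustration.
    class DeviceMessage
    {
        public int Priority;      // lower value = more urgent
        public byte[] Payload;
    }

    class PriorityQueue
    {
        readonly SortedDictionary<int, Queue<DeviceMessage>> buckets =
            new SortedDictionary<int, Queue<DeviceMessage>>();
        readonly object gate = new object();

        public void Enqueue(DeviceMessage msg)
        {
            lock (gate)
            {
                Queue<DeviceMessage> bucket;
                if (!buckets.TryGetValue(msg.Priority, out bucket))
                    buckets[msg.Priority] = bucket = new Queue<DeviceMessage>();
                bucket.Enqueue(msg);
            }
        }

        public DeviceMessage Dequeue()
        {
            lock (gate)
            {
                // SortedDictionary iterates keys in ascending order, so the first
                // non-empty bucket holds the highest-priority messages.
                foreach (var pair in buckets)
                    if (pair.Value.Count > 0)
                        return pair.Value.Dequeue();
                return null;    // queue is empty
            }
        }
    }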
I know this is an old post, but I think you should take a look at the memory pool implemented in the ILNumerics project. I think they did exactly what you need and it is a very nice piece of code.
Download the code at http://ilnumerics.net/ and take a look at the file ILMemoryPool.cs
I'm doing something similar. I have messages coming in on MTA threads that need to be serviced on STA threads.
I used a BlockingCollection (part of the Parallel FX extensions) that is monitored by several STA threads (configurable, but it defaults to a multiple of the number of cores). Each thread tries to pop a message off the queue; it either times out and tries again, or successfully pops a message off and services it.
I've got it wired with perfmon counters to keep track of idle time, job lengths, incoming messages, etc, which can be used to tweak the queue's settings.
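The consumer side of that setup looks roughly like the sketch below; the 100 ms timeout and the Service method are placeholders.

    using System.Collections.Concurrent;
    using System.Threading;

    class MessagePump
    {
        readonly BlockingCollection<object> queue = new BlockingCollection<object>();

        public void Post(object message)
        {
            queue.Add(message);    // called from the MTA producer threads
        }

        // Run once per STA worker thread.
        public void ConsumeLoop(CancellationToken token)
        {
            while (!token.IsCancellationRequested)
            {
                object message;
                // Either time out and try again, or pop a message off and service it.
                if (queue.TryTake(out message, 100))
                    Service(message);
            }
        }

        void Service(object message) { /* placeholder: do the actual work */ }
    }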
You'd have to implement a custom collection, or perhaps extend BC, to implement queue item priorities.
One of the reasons I implemented it this way is that, as I understand it, queueing theory generally favors a single line with multiple servers (why do I feel like I'm going to catch flak for that?).
