I am trying to implement serial communications between a micro-controller and a C# Windows application.
I have everything working correctly in the computer-to-micro-controller direction, but I am having trouble working out how exactly to implement communications in the other direction.
My messages are composed of 4 bytes (a packing sketch follows the list):
B0 – Address/Name of the value being sent
B1 – High Byte
B2 – Low Byte
B3 – Checksum = Addition of bytes 0-2
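A minimal sketch of packing such a frame on the C# side (the method name is illustrative, and the checksum is assumed to be the low byte of the sum):

// Pack one value into the 4-byte frame described above.
// Assumes the checksum is the sum of bytes 0-2 truncated to 8 bits.
static byte[] PackFrame(byte address, ushort value)
{
    byte high = (byte)(value >> 8);                       // B1 - high byte
    byte low = (byte)(value & 0xFF);                      // B2 - low byte
    byte checksum = (byte)((address + high + low) & 0xFF); // B3 - checksum
    return new byte[] { address, high, low, checksum };
}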
To make sure complete messages are received, I have the micro-controller give up on the current message if there is a gap of more than 20 ms between bytes. This appears to work well and can tolerate communication faults that might cause a loss of synchronisation.
I am not sure how, if at all, I can implement this timeout from within a C# application, as I know you have much less fine-grained control over timing there.
I have seen other ASCII protocols that send a start and stop character, but I am not sure how to implement this when sending binary data, where my values can take any possible byte value and might happen to collide with the start or stop character.
I need to keep the micro-controller side basic as I have limited resources, and the controller's primary task requires very precise timing (sub-microsecond range) that the overhead of converting ASCII to decimal could disrupt.
Does anybody have recommendations on how I should be implementing this, from either the micro-controller or the computer side?
EDIT
I have looked at some of the other questions on here, but they all seem to refer to much larger ASCII-based messages.
This is a pretty bad idea. It assumes that the machine on the other end of the wire is capable of providing the same guarantee. 20 msec is no issue at all on a micro-controller; for a machine that boots Linux or Windows, no way. A thread that's busy writing to the serial port can easily lose the processor for hundreds of milliseconds, or stall on a garbage collection in the case of C#.
Just don't optimize for the exceptional case, there's no point. Timeouts should be in the second range, ten times larger than the worst case you expect. Make the protocol more reliable by framing it. Always a start byte, which gives the receiver a chance to re-synchronize. Might as well include a length byte, you are bound to need it sooner or later. Favor a CRC over a checksum. Check out RFC 916 for a recoverable protocol suggestion, albeit one that is widely ignored. It worked well when I used it, although it needed extra work to get connection attempts reliable; you have to flush the receive buffer.
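To illustrate that advice, a rough sketch of such a frame on the C# side; the 0x7E start byte and the CRC-8 polynomial are arbitrary choices here, not something the answer prescribes:

using System;

// Frame layout: [start 0x7E] [length] [payload ...] [CRC-8 over length + payload]
// A payload byte that happens to equal 0x7E is fine: the receiver validates
// length and CRC before accepting a frame, and re-synchronises on the next 0x7E otherwise.
static byte[] Frame(byte[] payload)
{
    var frame = new byte[payload.Length + 3];
    frame[0] = 0x7E;                    // start byte, gives the receiver a resync point
    frame[1] = (byte)payload.Length;    // length byte (payload must be under 256 bytes)
    Array.Copy(payload, 0, frame, 2, payload.Length);
    frame[frame.Length - 1] = Crc8(frame, 1, payload.Length + 1);
    return frame;
}

static byte Crc8(byte[] data, int offset, int count)   // polynomial 0x07, no reflection
{
    byte crc = 0;
    for (int i = offset; i < offset + count; i++)
    {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x80) != 0 ? (byte)((crc << 1) ^ 0x07) : (byte)(crc << 1);
    }
    return crc;
}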
You can set the read timeout to 20ms by using the following command,
serialPort.ReadTimeout = 20;
This will make the read operation time out after 20 ms, at which point you can handle the incomplete message however you want.
Don't use ReadExisting with this timeout, as it does not honour the read timeout;
instead use Read() or ReadByte() and catch the TimeoutException.
Incidentally, the same can be done with WriteTimeout, which can trigger even on successful writes, so take care with that.
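A minimal read loop along those lines (the port name and baud rate are placeholders, and the 4-byte message length follows the question above):

using System;
using System.IO.Ports;

var port = new SerialPort("COM3", 115200);  // placeholder settings
port.ReadTimeout = 20;                      // milliseconds allowed between bytes
port.Open();

var message = new byte[4];
int received = 0;
while (received < message.Length)
{
    try
    {
        message[received++] = (byte)port.ReadByte();  // blocks for at most ReadTimeout
    }
    catch (TimeoutException)
    {
        received = 0;  // gap between bytes too long: throw away the partial message
    }
}
// message[] now holds one complete 4-byte frame; verify the checksum before using it.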
Related
I am using a System.Net.Sockets.Socket in TCP mode to send data to some other machine. I implemented a listener to receive data formatted in a certain way and it is all working nicely. Now I am looking for ways to optimize use of bandwidth and delivery speed.
I wonder if it is worth sending a single array of bytes (or Span in the newer frameworks) instead of using a consecutive series of Send calls. I now send my encoded message in parts (head, length, tail, end, using separate Send calls).
Sure, I could test this and I will, but I do not have much experience with this and may be missing important considerations.
I realize there is a nowait option that may have an impact and that the socket and whatever is there lower in the stack may apply its own optimizing policy.
I would like to be able to prioritize delivery time (the time between the call to Send and reception at the other end) over bandwidth use for those messages where it matters, and be more lenient with messages that are not time critical. But then again, I create a new socket whenever I find there is something in my queue and then use it until the queue is empty, which could cover more than one message, so this may not always work. Ideally I would want to be lenient so the socket can optimize the payload, until a time-critical message hits the queue, and then tell the socket to hurry until no more time-critical messages are in the queue.
So my primary question is should I build my message before calling Send once (would that potentially do any good or just waste CPU cycles) and are there any caveats an experienced TCP programmer could make me aware of?
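For what it's worth, coalescing the parts into one buffer before a single Send looks roughly like this (the head/body/tail names mirror the description above and are otherwise illustrative); whether it actually improves latency is something to measure rather than assume:

using System;
using System.Net.Sockets;

static void SendMessage(Socket socket, byte[] head, byte[] body, byte[] tail)
{
    // One contiguous buffer and one Send call, instead of a Send per part.
    var buffer = new byte[head.Length + 4 + body.Length + tail.Length];
    int offset = 0;
    Buffer.BlockCopy(head, 0, buffer, offset, head.Length);
    offset += head.Length;
    Buffer.BlockCopy(BitConverter.GetBytes(body.Length), 0, buffer, offset, 4);  // length field
    offset += 4;
    Buffer.BlockCopy(body, 0, buffer, offset, body.Length);
    offset += body.Length;
    Buffer.BlockCopy(tail, 0, buffer, offset, tail.Length);
    socket.Send(buffer);
}

Socket.Send also has an overload that takes an IList<ArraySegment<byte>>, which hands several buffers to the stack in one call without copying them first.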
I have two PCs connected by a direct Ethernet cable over a 1 Gbps link. One of them acts as a TCP server and the other acts as TCP client(s). Now I would like to achieve the maximum possible network throughput between these two.
Options I tried:
Creating multiple clients on PC-1 with different port numbers, connecting to the TCP server. The reason for creating multiple clients is to increase the network throughput, but here I have an issue.
I have a buffer queue of events to be sent to the server. There will be multiple messages with the same event number. The server has to acquire all the messages and then sort them based on the event number. Each client dequeues a message from the concurrent queue and sends it to the server; once sent, the client repeats the same. I have put a constraint on the client side that Event-2 will not be sent until all messages labelled Event-1 are sent. Hence, I see the sent event order is correct. And the TCP server continuously receives from all the clients.
Now let's come to the problem:
The server is receiving the data in a slightly random manner, as I have shown in the image, and the randomness between two successive events gets worse the longer the acquisition runs. I suspect this random behaviour is due to the parallel worker threads executing the IO completion callbacks.
Technology used: F# async sockets with SocketAsyncEventArgs.
Solution I tried: instead of allowing receives from all the clients at the server side, I tried to poll for the next available client with pending data. That ensured the correct order, but its performance is not at all comparable to the earlier approach.
I want to receive in the same, or nearly the same, order as the clients send (not non-deterministic randomness). Is there any way I can preserve the order and still maintain the better throughput? What are the best ways to achieve nearly 100% network throughput between two PCs?
As others have pointed out in the comments, a single TCP connection is likely to give you the highest throughput, if it's TCP you want to use.
You can possibly achieve slightly (really marginally) higher throughput with UDP, but then you have the hassle of recreating all the goodies TCP gives you for free.
If you want bidirectional high volume high speed throughput (as opposed to high volume just one way at a time), then it's possible one connection for each direction is easier to cope with, but I don't have that much experience with it.
Design tips
You should keep the connection open. The client will need to ask "are you still there?" at regular intervals if no other communication goes on. (On second thought, I realize that the only purpose of this is to allow quick response and the possibility for the server to initiate a message transaction. So I revise it to: keep the connection open for a full transaction at least.)
Also, you should split up large messages - messages over a certain size. Keep the number of bytes you send in each chunk to a round (power-of-two) maximum, typically 8K, 16K, 32K or 64K on a local network. Experiment with sizes. The suggested max sizes have been optimal since Windows 3 at least. You need some sort of protocol where each chunk consists of a fixed header (typically a magic number for checking and resync, a chunk number, also for checking and analysis, and a total packet length) followed by the data.
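A sketch of that kind of chunking (the magic value, the 12-byte header layout and the 32K chunk size are arbitrary examples to experiment with):

using System;
using System.Net.Sockets;

static void SendChunked(Socket socket, byte[] data)
{
    const uint Magic = 0xC0DEFACE;    // marker for checking and resynchronisation
    const int ChunkSize = 32 * 1024;  // experiment with 8K / 16K / 32K / 64K

    int chunkNumber = 0;
    for (int offset = 0; offset < data.Length; offset += ChunkSize)
    {
        int count = Math.Min(ChunkSize, data.Length - offset);
        var header = new byte[12];    // magic + chunk number + payload length of this chunk
        BitConverter.GetBytes(Magic).CopyTo(header, 0);
        BitConverter.GetBytes(chunkNumber++).CopyTo(header, 4);
        BitConverter.GetBytes(count).CopyTo(header, 8);
        socket.Send(header);
        socket.Send(data, offset, count, SocketFlags.None);
    }
}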
You can possibly further improve throughput with compression (usually low quick compression) - it depends very much on the data, and whether you're on a fast or slow network.
Then there's this hassle that one typically runs into - problems with the Nagle algorithm - and I no longer remember enough of the details there. I believe I used to overcome that by sending an acknowledgement in return for each chunk sent, and I suspect that by doing so you satisfy the design requirements and avoid waiting for the last bytes to come in. But do google this.
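If Nagle does turn out to be the culprit, the .NET switch for it is the NoDelay property (socket and tcpClient below stand for whatever connection object you already have); disabling it trades a little bandwidth efficiency for lower latency on small sends:

// Disable the Nagle algorithm so small writes are pushed out immediately.
socket.NoDelay = true;      // on a System.Net.Sockets.Socket
tcpClient.NoDelay = true;   // or on a System.Net.Sockets.TcpClient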
I'm writing a simple chat program using sockets. When I send a long message, flush the stream, and then send a short message afterwards, the end of the long message gets appended to the short message. It looks like this:
Send "aaasdsd"
Recieve "aaasdsd"
Send "bb"
Recieve "bbasdsd"
Through debugging I've found that the Flush method, which is supposed to clear all data from the stream, does not do that. According to MSDN, this is the expected behaviour, because NetworkStream is not buffered. How do I clear the stream in that case? I could just follow every message with an empty one (consisting of \0 characters) of the same length, but I don't think it's correct to do that; also, it would screw up some features I need.
TCP doesn't work this way. It's as simple as that.
TCP is a stream-based protocol. That means that you shouldn't ever treat it as a message-based protocol (unlike, say, UDP). If you need to send messages over TCP, you have to add your own messaging protocol on top of TCP.
What you're trying to do here is send two separate messages, and receive two separate messages on the other side. This would work fine on UDP (which is message-based), but it will not work on TCP, because TCP is a stream with no organisation.
So yeah, Flush works just fine. It's just that no matter how many times you call Flush on one side, and how many times you call individual Sends, each Receive on the other end will get as much data as can fit in its buffer, with no respect to the Sends on the other side.
The solution you've devised (almost - just separate the strings with a single \0) is actually one of the proper ways to handle this. By doing that, you're working with messages on top of the stream again. This is called message framing - it allows you to tell individual messages apart. In your case, you've added delimiters between the messages. Think about writing the same data in a file - again, you'll need some way of your own to separate the individual messages (for example, using end lines).
Another way to handle message framing is using a length prefix - before you send the string itself, send its length. Then, when you read on the other side, you know that before each string there will always be a length prefix, so the reader knows where the message ends.
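A minimal sketch of that length-prefix framing over a NetworkStream (the 4-byte little-endian length and UTF-8 text are just example choices):

using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

static void WriteMessage(NetworkStream stream, string message)
{
    byte[] payload = Encoding.UTF8.GetBytes(message);
    stream.Write(BitConverter.GetBytes(payload.Length), 0, 4);  // length prefix
    stream.Write(payload, 0, payload.Length);                   // then the message itself
}

static string ReadMessage(NetworkStream stream)
{
    int length = BitConverter.ToInt32(ReadExact(stream, 4), 0);
    return Encoding.UTF8.GetString(ReadExact(stream, length));
}

static byte[] ReadExact(Stream stream, int count)
{
    var buffer = new byte[count];
    int read = 0;
    while (read < count)
    {
        int n = stream.Read(buffer, read, count - read);
        if (n == 0) throw new IOException("Connection closed mid-message.");
        read += n;
    }
    return buffer;
}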
Yet another way, though probably not very useful for your case, is to work with fixed-length data. A message will always be exactly 100 bytes, for example. This is very powerful when combined with pre-defined message types - so message type 1 would contain exactly two integers, representing some coördinates, for example.
In either case, though, you'll need your own buffering on the receiving end. This is because (as you've already seen) a single receive can read multiple messages at once, and at the same time, it's not guaranteed to read a whole message in a single read. Writing your own networking is actually pretty tricky - unless you're doing this to actually learn network programming, I'd recommend using some ready-made technology - for example, Lidgren (a nice networking library, optimized for games but fine for general networking as well) or WCF. For a chat system, simple HTTP (especially with bi-directional WebSockets) might be just fine as well.
EDIT:
As Damien correctly noted, there seems to be another problem with your code - you seem to be ignoring the return value of Read. The return value tells you the number of bytes you've actually read. Since you have a fixed-size persistent buffer on the receiving side (apparently), every byte beyond the count you've just read will still contain old data. To fix this, make sure you only work with as many bytes as Read returned. Also, since this suggests you're ignoring the Read return value altogether, make sure to properly handle the case when Read returns 0 - that means the other side has gracefully shut down its connection, and the receiving side should do the same.
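Concretely, that means decoding only the bytes the current Read returned, roughly like this (stream stands for your NetworkStream; the buffer size and UTF-8 encoding are arbitrary here):

var buffer = new byte[4096];
int bytesRead = stream.Read(buffer, 0, buffer.Length);
if (bytesRead == 0)
{
    // The other side shut down gracefully - close our end as well.
    stream.Close();
}
else
{
    // Only the first bytesRead bytes are fresh; anything after that is leftover data.
    string chunk = Encoding.UTF8.GetString(buffer, 0, bytesRead);
    // Append chunk to your own receive buffer and split it on your message delimiter.
}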
We have a pretty standard TCP implementation of SocketAsyncEventArgs (no real difference to the numerous examples you can google).
We have a load testing console app (also using SocketAsyncEventArgs) that sends x many messages per second. We use thread spinning to introduce mostly accurate intervals within the 1000ms to send the message (as opposed to sending x messages as fast as possible and then waiting for the rest of the 1000ms to elapse).
The messages we send are approximately 2k in size, to which the server implementation responds (on the same socket) with a pre-allocated HTTP OK 200 response.
We would expect to be able to send 100's if not 1000's of messages per second using SocketAsyncEventArgs. We found that with a simple blocking TcpListener/TcpClient we were able to process ~150msg/s. However, even with just 50 messages per second over 20 seconds, we lose 27 of the 1000 messages on average.
This is a TCP implementation, so we of course expected to lose no messages; especially given such a low throughput.
I'm trying to avoid pasting the entire implementation (~250 lines), but code available on request if you believe it helps. My question is, what load should we expect from SAEA? Given that we preallocate separate pools for Accept/Receive/Send args which we have confirmed are never starved, why do we not receive an arg.Complete callback for each message?
NB: No socket errors are witnessed during execution
Responses to comments:
#usr: Like you, we were concerned that our implementation may have serious issues cooked in. To confirm this we took the downloadable zip from this popular Code Project example project. We adapted the load test solution to work with the new example and re-ran our tests. We experienced EXACTLY the same results using someone else's code (which is primarily why we decided to approach the SO community).
We sent 50 msg/sec for 20 seconds, both the code project example and our own code resulted in an average of 973/1000 receive operations. Please note, we took our measurements at the most rudimentary level to reduce risk of incorrect monitoring. That is, we used a static int with Interlocked.Increment on the onReceive method - onComplete is only called for asynchronous operations, onReceive is invoked both by onComplete and when !willRaiseEvent.
All operations performed on a single machine using the loopback address.
Having experienced issues with two completely different implementations, we then doubted our load test implementation. We confirmed via Wireshark that our load test project did indeed send the traffic as expected (fragmentation was present in the pcap log, but Wireshark indicated the packets were reassembled as expected). My networking understanding at low levels is weaker than I'd like, I admit, but given that the amount of fragmentation is nowhere near the number of missing messages, we are for now assuming the two are not related. As I understand it, fragmentation should be handled at a lower layer, and completely abstracted away at our level of API calls.
#Peter,
Fair point, in a normal networking scenario such level of timing accuracy would be utterly pointless. However, the waiting is very simple to implement and wireshark confirms the timing of our messages to be as accurate as the pcap log's precision allows. Given we are only testing on loopback (the same code has been deployed to Azure cloud services also which is the intended destination for the code once it is production level, but the same if not worse results were found on A0, A1, and A8 instances), we wanted to ensure some level of throttling. The code would easily push 1000 async args in a few ms if there was no throttling, and that is not a level of stress we are aiming for.
I would agree, given it is a TCP implementation, there must be a bug in our code. Are you aware of any bugs in the linked Code Project example? Because it exhibits the same issues as our code.
#usr, as predicted the buffers did contain multiple messages. We now need to work out how it is we're going to marry messages back together (TCP guarantees sequence of delivery, but in using multiple SAEA's we lose that guarantee through threading).
The best solution is to abandon custom TCP protocols. Use HTTP and protocol buffers, for example, or web services. For all of these there are fast and easy-to-use asynchronous libraries available. Assuming this is not what you want:
Define a message framing format. For example, prepend BitConverter.GetBytes(messageLengthInBytes) to each message. That way you can deconstruct the stream on the receiving side.
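A sketch of that prefixing step (a 4-byte Int32 length, little-endian on the usual hosts, which is what BitConverter produces):

using System;

// Frame one message by prepending its length, so the receiver can split the stream.
static byte[] FrameMessage(byte[] messageBytes)
{
    var framed = new byte[4 + messageBytes.Length];
    BitConverter.GetBytes(messageBytes.Length).CopyTo(framed, 0);  // 4-byte length prefix
    messageBytes.CopyTo(framed, 4);                                // then the payload
    return framed;
}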
I am writing a client for a server that typically sends data as strings in 500 or less bytes. However, the data will occasionally exceed that, and a single set of data could contain 200,000 bytes, for all the client knows (on initialization or significant events). However, I would like to not have to have each client running with a 50 MB socket buffer (if it's even possible).
Each set of data is delimited by a null \0 character. What kind of structure should I look at for storing partially sent data sets?
For example, the server may send ABCDEFGHIJKLMNOPQRSTUV\0WXYZ\0123!\0. I would want to process ABCDEFGHIJKLMNOPQRSTUV, WXYZ, and 123! independently. Also, the server could send ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890LOL123HAHATHISISREALLYLONG without the terminating character. I would want that data set stored somewhere for later appending and processing.
Also, I'm using asynchronous socket methods (BeginSend, EndSend, BeginReceive, EndReceive) if that matters.
Currently I'm debating between List<Byte> and StringBuilder. Any comparison of the two for this situation would be very helpful.
Read the data from the socket into a buffer. When you get the terminating character, turn it into a message and send it on its way to the rest of your code.
Also, remember that TCP is a stream, not a packet. So you should never assume that you will get everything sent at one time in a single read.
As far as buffers go, you should probably only need one per connection at most. I'd probably start with the max size that you reasonably expect to receive, and if that fills, create a new buffer of a larger size - a typical strategy is to double the size when you run out to avoid churning through too many allocations.
If you have multiple incoming connections, you may want to do something like create a pool of buffers, and just return "big" ones to the pool when done with them.
You could just use a List<byte> as your buffer, so the .NET framework takes care of automatically expanding it as needed. When you find a null terminator you can use List.RemoveRange() to remove that message from the buffer and pass it to the next layer up.
You'd probably want to add a check and throw an exception if it exceeds a certain length, rather than just wait until the client runs out of memory.
(This is very similar to Ben S's answer, but I think a byte array is a bit more robust than a StringBuilder in the face of encoding issues. Decoding bytes to a string is best done higher up, once you have a complete message.)
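A sketch of that List<byte> approach (the \0 delimiter comes from the question; the ASCII decoding and the class/method names are just illustrative):

using System;
using System.Collections.Generic;
using System.Text;

class NullDelimitedBuffer
{
    private readonly List<byte> _buffer = new List<byte>();

    // Feed in whatever the last receive produced; returns any complete messages.
    public List<string> Append(byte[] data, int count)
    {
        for (int i = 0; i < count; i++) _buffer.Add(data[i]);
        // Per the note above, consider throwing here if _buffer.Count exceeds a sane maximum.

        var messages = new List<string>();
        int terminator;
        while ((terminator = _buffer.IndexOf((byte)0)) >= 0)
        {
            messages.Add(Encoding.ASCII.GetString(_buffer.ToArray(), 0, terminator));
            _buffer.RemoveRange(0, terminator + 1);  // drop the message and its trailing \0
        }
        return messages;
    }
}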
I would just use a StringBuilder and read in one character at a time, copying and emptying the builder whenever I hit a null terminator.
I wrote this answer regarding Java sockets but the concept is the same.
What's the best way to monitor a socket for new data and then process that data?