COM port read in C#

Please tell me how to read data from a COM port in C#, when the data arrives as bytes of variable length; that is, the answer may be a byte array of 20 or 50 bytes. The main question is: how do you know that the device has stopped responding?

The most important part is defining the protocol bits used. You should have both start and stop bits that will tell your SerialPort object when to stop reading. Usually you don't have to care after this since your callback function will contain the data in an array.
http://msmvps.com/blogs/coad/archive/2005/03/23/SerialPort-_2800_RS_2D00_232-Serial-COM-Port_2900_-in-C_2300_-.NET.aspx

You don't. COM ports are a bit like TCP - they're a streaming service - they only transfer 7 or 8 bits at a time (depending on how you set the port up; normally 8 bits).
If you want to send anything more complex than a byte, you need to build a protocol on top. If your data is text, a CR or null at the end will often do. If it's values spanning the whole 0-255 byte range, then you need a more complex protocol to ensure that the framing of the data units is received correctly. Maybe your requirements can be met by a simple timeout, e.g. 'if no chars received for 500ms, that's the end of the data unit', but such timeout-based protocols are obviously low performance and subject to failures :(
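For the text-with-terminator case, a sketch along these lines might do; the CR terminator, the timeout value and the helper name are only assumptions for illustration:

using System;
using System.IO.Ports;

static class CrTerminatedReader
{
    // Reads one CR-terminated reply, or returns null if the device goes quiet.
    public static string ReadReply(SerialPort port)
    {
        port.ReadTimeout = 500;             // illustrative value
        try
        {
            return port.ReadTo("\r");       // blocks until CR arrives or the timeout elapses
        }
        catch (TimeoutException)
        {
            return null;                    // no complete reply arrived
        }
    }
}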
Rgds,
Martin

Related

Read from serial port with timeout between bytes in .NET

I would like to read some data from a serial port. However, the packets of data are not well defined - there is no fixed length or end character. What I do to solve this is I wait until the serial port stops sending data for a short time, say 50ms, and then I say one transmission is complete. In Python, I can use the inter_byte_timeout feature of pyserial for this. I am confident the device will send all bytes in one go, so this works.
How do I do it in C#? I have been using System.IO.Ports.SerialPort, but I am open to other solutions.
What doesn't work is the normal ReadTimeout property: it only applies while waiting for the first byte, and Read will then return an arbitrary amount of data. I also can't use ReceivedBytesThreshold, since I don't know how many bytes there will be in total.
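One way to approximate pyserial's inter_byte_timeout with System.IO.Ports.SerialPort is to set a short ReadTimeout and treat the first TimeoutException after at least one byte as the end of the packet. A rough sketch under that assumption; the names and timeout values are illustrative:

using System;
using System.Collections.Generic;
using System.IO.Ports;

static class InterByteTimeoutRead
{
    // Reads one "packet": waits up to firstByteTimeoutMs for the first byte,
    // then keeps reading until the line goes quiet for interByteTimeoutMs.
    public static byte[] ReadPacket(SerialPort port, int firstByteTimeoutMs, int interByteTimeoutMs)
    {
        var buffer = new List<byte>();

        port.ReadTimeout = firstByteTimeoutMs;
        buffer.Add((byte)port.ReadByte());          // throws TimeoutException if no data arrives at all

        port.ReadTimeout = interByteTimeoutMs;
        while (true)
        {
            try
            {
                buffer.Add((byte)port.ReadByte());
            }
            catch (TimeoutException)
            {
                break;                              // gap between bytes => packet complete
            }
        }
        return buffer.ToArray();
    }
}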

Parsing data from a network stream?

Recently I started working with sockets. I realized that when reading from a network stream, you cannot know how much data is coming in. So either you know in advance how many bytes have to be received, or you know which bytes to expect.
Since I am currently trying to implement a C# WebSocket server, I need to process HTTP requests. An HTTP request can have arbitrary length, so knowing the number of bytes in advance is out of the question. But an HTTP request always has a certain format. It starts with the request-line, followed by zero or more headers, etc. So with all this information it should be simple, right?
Nope.
One approach I came up with was reading all data until a specific sequence of bytes was recognized. The StreamReader class has the ReadLine method which, I believe, works like this. For HTTP a reasonable delimiter would be the empty line separating the message head from the body.
The obvious problem here is the requirement of a (preferably short) termination sequence, like a line break. Even the HTTP specification suggests that these two adjacent CRLFs are not a good choice, since they could also occur at the beginning of the message. And after all, two CRLFs are not a simple delimiter anyway.
So expanding the method to arbitrary type-3 grammars, I concluded the best choice for parsing the data is a finite state machine. I can feed the data to the machine byte after byte, just as I am reading it from the network stream. And as soon as the machine accepts the input I can stop reading data. Also, the FSM could immediately capture the significant tokens.
But is this really the best solution? Reading byte after byte and validating it with a custom parser seems tedious and expensive. And the FSM would be either slow or quite ugly. So...
How do you process data from a network stream when the form is known but not the size?
How can classes like the HttpListener parse the messages and be fast at it too?
Did I miss something here? How would this usually be done?
HttpListener and other such components can parse the messages because the format is deterministic. The Request is well documented. The request header is a series of CRLF-terminated lines, followed by a blank line (two CRLF in a row).
The message body can be difficult to parse, but it's deterministic in that the header tells you what encoding is used, whether it's compressed, etc. Even multi-part messages are not terribly difficult to parse.
Yes, you do need a state machine to parse HTTP messages. And yes you have to parse it byte-by-byte. It's somewhat involved, but it's very fast. Typically you read a bunch of data from the stream into a buffer and then process that buffer byte-by-byte. You don't read the stream one byte at a time because the overhead will kill performance.
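As a rough illustration of that buffering pattern (not the actual HttpListener code), the sketch below reads chunks from a stream and walks them byte-by-byte with a tiny state machine until the blank line that ends an HTTP header block; the class name and buffer size are arbitrary:

using System.IO;
using System.Text;

static class HeaderReader
{
    // Reads from the stream in chunks and scans byte-by-byte until the
    // CRLF CRLF that terminates an HTTP header block is seen.
    public static string ReadHeaderBlock(Stream stream)
    {
        var header = new MemoryStream();
        var chunk = new byte[4096];
        int state = 0;                       // counts the CR LF CR LF sequence

        while (true)
        {
            int read = stream.Read(chunk, 0, chunk.Length);
            if (read == 0) throw new EndOfStreamException("Connection closed before header ended.");

            for (int i = 0; i < read; i++)
            {
                byte b = chunk[i];
                header.WriteByte(b);

                if (b == (byte)'\r')
                    state = (state == 2) ? 3 : 1;
                else if (b == (byte)'\n' && (state == 1 || state == 3))
                    state++;
                else
                    state = 0;

                // NOTE: any bytes after the blank line in this chunk belong to the
                // message body and would need to be kept; omitted here for brevity.
                if (state == 4)
                    return Encoding.ASCII.GetString(header.ToArray());
            }
        }
    }
}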
You should take a look at the HttpListener source code to see how it all works. Go to http://referencesource.microsoft.com/netframework.aspx and download the .NET 4.5 Update 1 source.
Be prepared to spend a lot of time digging through that and through the HTTP spec.
By the way, it's not difficult to create a program that handles a small subset of HTTP requests. But I wonder why you'd want to do that when you can just use HttpListener and have all the details handled for you.
Update
You are talking about two different protocols. HTTP and WebSocket are two entirely different things. As the Wikipedia article says:
The WebSocket Protocol is an independent TCP-based protocol. Its only relationship to HTTP is that its handshake is interpreted by HTTP servers as an Upgrade request.
With HTTP, you know that the server will send the stream and then close the connection; it's a stream of bytes with a defined end. WebSocket is a message-based protocol; it enables a stream of messages. Those messages have to be delineated in some way; the sender has to tell the receiver where the end of the message is. That can be implicit or explicit. There are several different ways this is done:
The sender includes the length of message in the first few bytes of the message. For example, the first four bytes are a binary integer that says how many bytes follow in that message. So the receiver reads the first four bytes, converts that to an integer, and then reads that many bytes.
The length of the message is implicit. For example, sender and receiver agree that all messages are 80 bytes long.
The first byte of the message is a message type, and each message type has a defined length. For example, message type 1 is 40 bytes, message type 2 is 27 bytes, etc.
Messages have some terminator. In a line-oriented message system, for example, messages are terminated by CRLF. The sender sends the text and then CRLF. The receiver reads bytes until it receives CRLF.
Whatever the case, sender and receiver must agree on how messages are structured. Otherwise the case that you're worried about does crop up: the receiver is left waiting for bytes that will never be received.
In order to handle possible communications problems you set the ReceiveTimeout property on the socket, so that a Read will throw SocketException if it takes too long to receive a complete message. That way, your program won't be left waiting indefinitely for data that is not forthcoming. But this should only happen in the case of communications problems. Any reasonable message format will include a way to determine the length of a message; either you know how much data is coming, or you know when you've reached the end of a message.
If you want to send a message, you can just prepend the size of the message to it: get the number of bytes in the message and prepend that as a ulong. At the receiver, read the size of a ulong first, parse it, then read that many bytes from the stream and then close it.
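A minimal sketch of that length-prefix idea, assuming both ends agree on the prefix width and byte order (BinaryWriter/BinaryReader use little-endian); the class and method names are invented for the example:

using System.IO;

static class LengthPrefixedMessages
{
    // Sender: prepend the payload length as a ulong, then write the payload itself.
    // The writer is deliberately not disposed so the underlying stream stays open.
    public static void Send(Stream stream, byte[] payload)
    {
        var writer = new BinaryWriter(stream);
        writer.Write((ulong)payload.Length);
        writer.Write(payload);
        writer.Flush();
    }

    // Receiver: read the 8-byte length first, then exactly that many bytes.
    public static byte[] Receive(Stream stream)
    {
        var reader = new BinaryReader(stream);
        ulong length = reader.ReadUInt64();
        byte[] payload = reader.ReadBytes(checked((int)length));
        if ((ulong)payload.Length != length)
            throw new EndOfStreamException("Connection closed mid-message.");
        return payload;
    }
}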
In an HTTP header you can read: Content-Length: the length of the request body in octets (8-bit bytes).

Serial Message Separation

I am trying to implement serial communications between a micro-controller and a C# Windows application.
I have everything working correctly in the computer-to-micro-controller direction, but I am having trouble working out exactly how to implement communication in the other direction.
My messages are composed of 4 bytes:
B0 – Address/Name of the value being sent
B1 – High Byte
B2 – Low Byte
B3 – Checksum = Addition of bytes 0-2
To make sure complete messages are received, I have the micro-controller give up on the current message being received if there is longer than 20ms between bytes, this appears to work well and can tolerate faults in communications that may cause a loss in synchronisation.
I am not sure how, if at all, I can implement this delay from within a C# application, as I know you have much less fine-grained control over timing.
I have seen other ASCII protocols that send a start and stop character, but I am not sure how to implement this when sending binary data, where my values can take any possible byte value and might happen to coincide with the start or stop character.
I need to keep the micro-controller side basic as I have limited resources, and the controller's primary task requires very precise timing (sub-microsecond range), so it cannot afford the overhead that converting ASCII to decimal may have.
Does anybody have recommendations on how I should implement this, from either the micro-controller or the computer side?
EDIT
I have looked at some of the other questions on here but they all seem to refer to much larger ASCII based messages.
This is a pretty bad idea. It assumes that the machine on the other end of the wire is capable of providing the same guarantee. 20 msec is no issue at all on a micro-controller. A machine that boots Linux or Windows, no way. A thread that's busy writing to the serial port can easily lose the processor for hundreds of milliseconds. A garbage collection in the case of C#.
Just don't optimize for the exceptional case, there's no point. Timeouts should be in the second range, ten times larger than the worst case you expect. Make the protocol further reliable by framing it: always a start byte that gives the receiver a chance to re-synchronize. Might as well include a length byte, you are bound to need it sooner or later. Favor a CRC over a checksum. Check out RFC 916 for a recoverable protocol suggestion, albeit widely ignored. It worked well when I used it, although it needed extra work to get connection attempts reliable; you have to flush the receive buffer.
You can set the read timeout to 20ms by using the following command,
serialPort.ReadTimeout = 20;
This will make the read operation time out after 20ms, in which case you can do what you want.
Don't use ReadExisting with this timeout, as it does not honour the read timeout;
instead use Read() or ReadByte() and check for a TimeoutException.
Incidentally, the same can be done with WriteTimeout, even on successful writes, so take care with that.
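Putting the two answers together, a sketch of reading one 4-byte frame with a read timeout and a checksum check might look like this; the names are invented, and per the answer above a timeout larger than 20 ms is probably safer on the PC side:

using System;
using System.IO.Ports;

static class FourByteFrameReader
{
    // Tries to read one 4-byte frame: address, high byte, low byte, checksum.
    // Returns false on a timeout or a checksum failure, so the caller can
    // simply try again and resynchronise on the next frame.
    public static bool TryReadFrame(SerialPort port, out byte address, out ushort value)
    {
        address = 0;
        value = 0;
        var frame = new byte[4];

        try
        {
            port.ReadTimeout = 20;                     // give up if bytes stop arriving
            for (int i = 0; i < frame.Length; i++)
                frame[i] = (byte)port.ReadByte();
        }
        catch (TimeoutException)
        {
            return false;                              // incomplete frame; discard it
        }

        byte checksum = (byte)(frame[0] + frame[1] + frame[2]);
        if (checksum != frame[3])
            return false;                              // corrupted frame

        address = frame[0];
        value = (ushort)((frame[1] << 8) | frame[2]);
        return true;
    }
}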

Get and set udp packages flags and checksum with c#

I know how to send byte data to a server via the UDP protocol. How can I activate some flags on the packet that I am sending? How can I tell if a received packet has some flags activated? Moreover, how can I read the checksum contained in the packet? If I want to send a packet with no data and just a flag, how can I do that? I want to do this with C#. I don't want to modify the local endpoint, the remote endpoint, or anything else about each packet.
Edit
The reason why I want to do this is that I have tried very hard to do TCP hole punching. I opened a bounty question at forward traffic from port X to computer B with c# "UDP punch hole into firewall"; anyway, I managed to do UDP hole punching. As a result I want to make UDP reliable.
In other words, I want to create an algorithm that will enable me to send UDP packets that reach their destination in the right order. If I could set and read some of the flags, that would facilitate this algorithm.
I know it will not be easy to create, but at least I want to give it a try. I have been wasting a lot of time trying to find a C# library that does this...
UDP already has a checksum.
Did you mean to ask about something like RUDP? This allows you to guarantee packet delivery and ordering (both, none, or either). Lidgren implements a reliable UDP protocol.
If you want to roll your own, read up on how SEQ and ACK work in TCP; and emulate that over UDP (which is what Lidgren basically does).
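As a bare-bones starting point for rolling your own (nowhere near what Lidgren gives you), the sketch below just prepends a sequence number to each datagram so the receiver can detect gaps and re-order; ACKs and retransmission are left out, and all names are invented:

using System;
using System.Net;
using System.Net.Sockets;

static class SequencedUdp
{
    // Sender: prepend a 4-byte sequence number to the payload.
    public static void Send(UdpClient client, IPEndPoint remote, uint sequence, byte[] payload)
    {
        var datagram = new byte[4 + payload.Length];
        BitConverter.GetBytes(sequence).CopyTo(datagram, 0);
        payload.CopyTo(datagram, 4);
        client.Send(datagram, datagram.Length, remote);
    }

    // Receiver: split the datagram back into sequence number and payload.
    public static (uint sequence, byte[] payload) Receive(UdpClient client, ref IPEndPoint remote)
    {
        byte[] datagram = client.Receive(ref remote);
        uint sequence = BitConverter.ToUInt32(datagram, 0);
        var payload = new byte[datagram.Length - 4];
        Array.Copy(datagram, 4, payload, 0, payload.Length);
        return (sequence, payload);
    }
}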
In all honesty, I think it's going to be just as difficult to get the TCP side of things working as it is to tackle this question, and since TCP is designed to do what you're asking, I personally would take the "right tool for the right job" approach.
That said....
If you really must use UDP, then you can forget about adding anything at the protocol level; the underlying transport layer will just ignore it.
Instead you're going to have to design your own byte-level protocol (almost going back to the days of RS-232 and serial comms...).
You're going to need your data, and that's going to have to be wrapped in some way: say, a byte that defines the start of the packet, then maybe the data, some flags, etc.
I would look at data structures such as TLV (used by a lot of smart cards). TLV stands for Tag-Length-Value, so it might represent something like:
0x10 0x04 0x01 0x02 0x03 0x04
Where command code 0x10 might mean 'update ticker' and 0x04 means 4 bytes of data to follow... then the 4 bytes of data.
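A minimal sketch of that TLV idea, assuming a one-byte tag and a one-byte length (so values are capped at 255 bytes); the helper names are invented:

using System;
using System.IO;

static class Tlv
{
    // Writes one Tag-Length-Value record: one tag byte, one length byte,
    // then the value bytes.
    public static void Write(Stream stream, byte tag, byte[] value)
    {
        if (value.Length > byte.MaxValue)
            throw new ArgumentException("Value too long for a one-byte length field.");
        stream.WriteByte(tag);
        stream.WriteByte((byte)value.Length);
        stream.Write(value, 0, value.Length);
    }

    // Reads one record back: tag, declared length, then exactly that many bytes.
    public static (byte tag, byte[] value) Read(Stream stream)
    {
        int tag = stream.ReadByte();
        int length = stream.ReadByte();
        if (tag < 0 || length < 0) throw new EndOfStreamException();

        var value = new byte[length];
        int offset = 0;
        while (offset < length)
        {
            int read = stream.Read(value, offset, length - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return ((byte)tag, value);
    }
}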
As for flags and checksums: flags are easy, you can get 8 of them in a byte, and you just set them with OR masks (and test them with AND masks):
0x00 OR 0x01 = 0x01
0x01 OR 0x80 = 0x81
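In C# the usual way to do this is a [Flags] enum; a small sketch with invented flag names:

using System;

[Flags]
enum PacketFlags : byte
{
    None = 0x00,
    Ticker = 0x01,   // example names: pick whatever your protocol needs
    Urgent = 0x80
}

static class FlagDemo
{
    static void Main()
    {
        PacketFlags flags = PacketFlags.None;

        flags |= PacketFlags.Ticker;                        // set:   0x00 | 0x01 = 0x01
        flags |= PacketFlags.Urgent;                        // set:   0x01 | 0x80 = 0x81
        bool urgent = (flags & PacketFlags.Urgent) != 0;    // test with an AND mask
        flags &= ~PacketFlags.Ticker;                       // clear: AND with the complement

        Console.WriteLine($"{(byte)flags:X2}, urgent={urgent}");
    }
}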
Checksums are a little harder, but it really depends on what level of accuracy you need. Essentially a checksum is some kind of mathematical algorithm that uses the contents of your data to create some magic number that can be easily computed at the receiving end.
This could be something as simple as adding all the values together, masking the result with some magic number, and then adding that onto the end of your packet before sending the bytes.
Then do the same at the receiving end and compare the results.
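A sketch of that additive checksum, masking the sum down to one byte and tacking it onto the end of the packet (names invented):

static class SimpleChecksum
{
    // Sum every payload byte and keep the low 8 bits (the "magic number" mask).
    public static byte Compute(byte[] payload)
    {
        int sum = 0;
        foreach (byte b in payload)
            sum += b;
        return (byte)(sum & 0xFF);
    }

    // Sender: copy the payload and tack the checksum on the end.
    public static byte[] Append(byte[] payload)
    {
        var packet = new byte[payload.Length + 1];
        payload.CopyTo(packet, 0);
        packet[packet.Length - 1] = Compute(payload);
        return packet;
    }

    // Receiver: recompute over everything except the last byte and compare.
    public static bool Verify(byte[] packet)
    {
        if (packet.Length == 0) return false;
        var payload = new byte[packet.Length - 1];
        System.Array.Copy(packet, payload, payload.Length);
        return Compute(payload) == packet[packet.Length - 1];
    }
}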
Of course if you really don't want to go down this route of wrapping, doing protocols and all that jazz yourself, then you could learn a lesson from the bad old days of serial comms.
There are still plenty of open source implementations of things like XModem, ZModem, Kermit, YModem and many others floating around, and since most of these can (and did) work with byte streams, it shouldn't be too difficult to make them work with a byte stream over UDP instead of a byte stream over a serial port.

How to detect when a Protocol Buffer message is fully received?

This is kind of a branch off of my other question. Read it if you like, but it's not necessary.
Basically, I realized that in order to effectively use C#'s BeginReceive() on large messages, I need to either (a) read the packet length first, then read exactly that many bytes or (b) use an end-of-packet delimiter. My question is, are either of these present in protocol buffers? I haven't used them yet, but going over the documentation it doesn't seem like there is a length header or a delimiter.
If not, what should I do? Should I just build the message then prefix/suffix it with the length header/EOP delimiter?
You need to include the size or an end marker in your protocol. Nothing is built into stream-based sockets (TCP/IP) other than supporting an indefinite stream of octets arbitrarily broken up into separate packets (and packets can be split in transit as well).
A simple approach would be for each "message" to have a fixed-size header that includes a protocol version, a payload size and any other fixed data, followed by the message content (payload).
Optionally a message footer (fixed size) could be added with a checksum or even a cryptographic signature (depending on your reliability/security requirements).
Knowing the payload size allows you to keep reading a number of bytes that will be enough for the rest of the message (and if a read completes with less, doing another read for the remaining bytes until the whole message has been received).
Having an end-of-message indicator also works, but you need to define how to handle a message that contains that same octet sequence...
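A sketch of the header-first approach described above, assuming an example 8-byte header (version, reserved bytes, payload length); Stream.Read is looped because it may return fewer bytes than requested:

using System;
using System.IO;

static class FramedReader
{
    // Assumed header layout (8 bytes): 2-byte protocol version, 2 bytes reserved,
    // 4-byte payload length. The exact layout is an example, not a standard.
    public static byte[] ReadMessage(Stream stream)
    {
        byte[] header = ReadExactly(stream, 8);
        ushort version = BitConverter.ToUInt16(header, 0);
        int payloadLength = BitConverter.ToInt32(header, 4);

        if (version != 1)
            throw new InvalidDataException($"Unsupported protocol version {version}.");

        return ReadExactly(stream, payloadLength);
    }

    // Keep reading until the whole count has arrived (or the connection closes).
    private static byte[] ReadExactly(Stream stream, int count)
    {
        var buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException("Connection closed mid-message.");
            offset += read;
        }
        return buffer;
    }
}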
Apologies for arriving late at the party. I am the author of protobuf-net, one of the C# implementations. For network usage, you should consider the "[De]SerializeWithLengthPrefix" methods - that way, it will automatically handle the lengths for you. There are examples in the source.
I won't go into huge detail on an old post, but if you want to know more, add a comment and I'll get back to you.
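For illustration, a minimal sketch of the SerializeWithLengthPrefix/DeserializeWithLengthPrefix pattern; the SensorReading type and the choice of PrefixStyle.Base128 are assumptions made for the example (both ends just need to match):

using System.IO;
using ProtoBuf;   // protobuf-net

[ProtoContract]
class SensorReading
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public double Value { get; set; }
}

static class ProtoFraming
{
    // Writes the message with protobuf-net's own length prefix, and reads it
    // back the same way; both sides must agree on the PrefixStyle.
    public static void Write(Stream stream, SensorReading reading) =>
        Serializer.SerializeWithLengthPrefix(stream, reading, PrefixStyle.Base128);

    public static SensorReading Read(Stream stream) =>
        Serializer.DeserializeWithLengthPrefix<SensorReading>(stream, PrefixStyle.Base128);
}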
I agree with Matt that a header is better than a footer for Protocol Buffers, for the primary reason that as PB is a binary protocol it's problematic to come up with a footer that would not also be a valid message sequence. A lot of footer-based protocols (typically EOL ones) work because the message content is in a defined range (typically 0x20 - 0x7F ASCII).
A useful approach is to have your lowest level code just read buffers off of the socket and present them up to a framing layer that assembles complete messages and remembers partial ones (I present an async approach to this (using the CCR) here, albeit for a line protocol).
For consistency, you could always define your message as a PB message with three fields: a fixed-int as the length, an enum as the type, and a byte sequence that contains the actual data. This keeps your entire network protocol transparent.
TCP/IP, as well as UDP, packets include some reference to their size. The IP header contains a 16-bit field that specifies the length of the IP header and data in bytes. The TCP header contains a 4-bit field that specifies the size of the TCP header in 32-bit words. The UDP header contains a 16-bit field that specifies the length of the UDP header and data in bytes.
Here's the thing.
Using the standard run-of-the-mill sockets in Windows, whether you're using the System.Net.Sockets namespace in C# or the native Winsock stuff in Win32, you never see the IP/TCP/UDP headers. These headers are stripped off so that what you get when you read the socket is the actual payload, i.e., the data that was sent.
The typical pattern from everything I've ever seen and done using sockets is that you define an application-level header that precedes the data you want to send. At a minimum, this header should include the size of the data to follow. This will allow you to read each "message" in its entirety without having to guess as to its size. You can get as fancy as you want with it, e.g., sync patterns, CRCs, version, type of message, etc., but the size of the "message" is all you really need.
And for what it's worth, I would suggest using a header instead of an end-of-packet delimiter. I'm not sure if there is a significant disadvantage to the EOP delimiter, but the header is the approach used by most IP protocols I've seen. In addition, it just seems more intuitive to me to process a message from the beginning rather than wait for some pattern to appear in my stream to indicate that my message is complete.
EDIT: I have only just become aware of the Google Protocol Buffers project. From what I can tell, it is a binary serialization/de-serialization scheme for WCF (I'm sure that's a gross oversimplification). If you are using WCF, you don't have to worry about the size of the messages being sent because the WCF plumbing takes care of this behind the scenes, which is probably why you haven't found anything related to message length in the Protocol Buffers documentation. However, in the case of sockets, knowing the size will help out tremendously as discussed above. My guess is that you will serialize your data using the Protocol Buffers and then tack on whatever application header you come up with before sending it. On the receive side, you'll pull off the header and then de-serialize the remainder of the message.
