I would like to read data from a serial port. However, the packets of data are not well defined: there is no fixed length or end character. My workaround is to wait until the serial port stops sending data for a short time, say 50 ms, and then treat the transmission as complete. In Python I can use pyserial's inter_byte_timeout feature for this. I am confident the device sends all bytes in one go, so this works.
How do I do it in C#? I have been using System.IO.Ports.SerialPort, but I am open to other solutions.
What doesn't work is the normal ReadTimeout property: that only governs the wait for the first byte, after which Read returns an arbitrary amount of data. I also can't use ReceivedBytesThreshold, since I don't know how many bytes the transmission will contain in total.
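One way to approximate pyserial's inter_byte_timeout with System.IO.Ports.SerialPort is to block indefinitely for the first byte, then switch ReadTimeout to the inter-byte gap and read until it expires. This is a sketch, not a tested driver-level equivalent; the class name, method name, and 50 ms default are my own choices:

```csharp
using System;
using System.Collections.Generic;
using System.IO.Ports;

static class InterByteTimeoutReader
{
    // Blocks until the first byte arrives, then keeps reading until the
    // port is silent for interByteTimeoutMs, mimicking pyserial's
    // inter_byte_timeout.
    public static byte[] ReadPacket(SerialPort port, int interByteTimeoutMs = 50)
    {
        var packet = new List<byte>();

        port.ReadTimeout = SerialPort.InfiniteTimeout; // wait as long as needed for byte 1
        packet.Add((byte)port.ReadByte());

        port.ReadTimeout = interByteTimeoutMs; // now enforce the inter-byte gap
        try
        {
            while (true)
                packet.Add((byte)port.ReadByte());
        }
        catch (TimeoutException)
        {
            // Silence for interByteTimeoutMs: treat the transmission as complete.
        }
        return packet.ToArray();
    }
}
```

ReadByte per byte is fine for short packets; for higher throughput you would read chunks with SerialPort.Read and apply the same TimeoutException handling.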
Related
I am attempting to replace a C library used for communications to a device over RS-485. My code works well on some serial devices and all USB-to-RS485 and IP-to-RS485 devices that I have tried. However, on other serial devices, it fails to correctly receive packets that are longer than about 28 bytes. After about that many bytes, it receives some garbage characters and times out (as it never receives enough characters).
On the same hardware, I can receive data through the C library without issue. (So, the hardware is clearly capable).
Initially, I was calling SerialPort.BaseStream to get a stream and then Stream.ReadByte on the stream to receive the data. I found an offhand reference in the documentation indicating that BaseStream is not buffered. So, I switched to calling SerialPort.ReadByte - which made no difference.
In an attempt to be more efficient, I switched the code to call SerialPort.Read for larger chunks of data. This (counter-intuitively) made things worse - failing after the 20th character.
As SerialPort.ReadBufferSize defaults to 4096 bytes (and won't even accept anything smaller) I am confused as to how the packets can overrun at 20-ish bytes.
How does one go about receiving data on a serial port such that the read buffer comes into play?
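One thing that bites people at the API level, regardless of the driver: both Stream.Read and SerialPort.Read return as soon as any bytes are available, not when the requested count is reached, so a fixed-length packet has to be accumulated in a loop. A minimal sketch (the helper name is mine); it does not explain corruption after 28 bytes, which sounds more like a UART FIFO or handshaking issue, but the loop is required either way:

```csharp
using System;
using System.IO;

static class ReadHelpers
{
    // Read may return fewer bytes than requested; loop until the whole
    // packet has been accumulated or the stream ends.
    public static void ReadExact(Stream stream, byte[] buffer, int offset, int count)
    {
        while (count > 0)
        {
            int n = stream.Read(buffer, offset, count);
            if (n == 0)
                throw new EndOfStreamException("Stream ended mid-packet.");
            offset += n;
            count -= n;
        }
    }
}
```

Usage with a serial port would look like `ReadHelpers.ReadExact(port.BaseStream, packet, 0, expectedLength);`.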
I've read around and I'm hearing mixed things about this: do you have to split a file into chunks to send it over a stream, or does the OS do that for you?
I have a byte array of about 320,000 values which I need to get across a network. I can get the first several thousand bytes over, but everything after that arrives as zeros.
I'm using the NetworkStream class: creating a TcpListener / TcpClient, getting the stream from the listener once connected, writing the array to the stream, then flushing, without success.
Any help would be appreciated.
Cheers,
When using TCP sockets, a 1024-byte send may or may not be split into chunks by the OS. At our level this behavior should be treated as undefined, and the receiver must be able to handle it. What most protocols do is specify a fixed (known) message size for a header that carries information such as file size, what range of data to read, and so on. Each message the server constructs carries this header. You as the programmer specify your chunk sizes, and each chunk must be reconstructed on the receiving side.
Here's a walkthrough:
Server sends a command to the client with information about the file, such as total size, file name, etc.
Client knows how big the command is based on a pre-programmed agreement of header size. If the command is 512 bytes, then the client will keep receiving data until it has filled a 512 byte buffer. Behind the scenes, the operating system could have picked that data up in multiple chunks, but that shouldn't be a worry for you. After all, you only care about reading exactly 512 bytes.
Server begins sending more commands, streaming a file to the client chunk by chunk (512 bytes at a time).
The client receives these chunks and constructs the file over the course of the connection.
Since the client knows how big the file is, it no longer reads on that socket.
The server terminates the connection.
That example is pretty basic, but it's a good groundwork on how communication works.
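The walkthrough above can be sketched from the client's side like this. It is only an illustration of the scheme: the 512-byte command size and the "first 4 bytes of the command are the file size" layout are this example's assumptions, not a standard:

```csharp
using System;
using System.IO;

static class FileReceiver
{
    // Receives a file using the scheme above: a fixed-size command that
    // carries the total size, then 512-byte chunks until the size is reached.
    public static byte[] ReceiveFile(Stream stream)
    {
        byte[] command = ReadExact(stream, 512);
        int fileSize = BitConverter.ToInt32(command, 0); // assumed layout: size first

        var file = new MemoryStream();
        int remaining = fileSize;
        while (remaining > 0)
        {
            byte[] chunk = ReadExact(stream, Math.Min(512, remaining));
            file.Write(chunk, 0, chunk.Length);
            remaining -= chunk.Length;
        }
        return file.ToArray();
    }

    // Stream.Read may return fewer bytes than asked for, even if the OS
    // delivered the data in several chunks; loop until the buffer is full.
    static byte[] ReadExact(Stream stream, int count)
    {
        var buffer = new byte[count];
        for (int got = 0; got < count; )
        {
            int n = stream.Read(buffer, got, count - got);
            if (n == 0) throw new EndOfStreamException();
            got += n;
        }
        return buffer;
    }
}
```

The ReadExact loop is the part that makes the OS-level chunking invisible: you ask for exactly 512 bytes and keep reading until you have them.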
Recently I started working with sockets. I realized that when reading from a network stream, you cannot know how much data is coming in. So either you know in advance how many bytes have to be received, or you know which bytes terminate the message.
Since I am currently trying to implement a C# WebSocket server, I need to process HTTP requests. An HTTP request can have arbitrary length, so knowing the byte count in advance is out of the question. But an HTTP request always has a certain format: it starts with the request line, followed by zero or more headers, and so on. With all this information it should be simple, right?
Nope.
One approach I came up with was to read data until a specific sequence of bytes is recognized. The StreamReader class has the ReadLine method, which I believe works like this. For HTTP, a reasonable delimiter would be the empty line separating the message head from the body.
The obvious problem here is the requirement of a (preferably short) termination sequence, like a line break. Even the HTTP specification suggests that two adjacent CRLFs are not a good choice, since they can also occur at the beginning of the message. And after all, two CRLFs are not a simple delimiter anyway.
So expanding the method to arbitrary type-3 grammars, I concluded the best choice for parsing the data is a finite state machine. I can feed the data to the machine byte after byte, just as I am reading it from the network stream. And as soon as the machine accepts the input I can stop reading data. Also, the FSM could immediately capture the significant tokens.
But is this really the best solution? Reading byte after byte and validating it with a custom parser seems tedious and expensive. And the FSM would be either slow or quite ugly. So...
How do you process data from a network stream when the form is known but not the size?
How can classes like the HttpListener parse the messages and be fast at it too?
Did I miss something here? How would this usually be done?
HttpListener and other such components can parse the messages because the format is deterministic. The request format is well documented: the request header is a series of CRLF-terminated lines, followed by a blank line (two CRLFs in a row).
The message body can be difficult to parse, but it's deterministic in that the header tells you what encoding is used, whether it's compressed, etc. Even multi-part messages are not terribly difficult to parse.
Yes, you do need a state machine to parse HTTP messages. And yes you have to parse it byte-by-byte. It's somewhat involved, but it's very fast. Typically you read a bunch of data from the stream into a buffer and then process that buffer byte-by-byte. You don't read the stream one byte at a time because the overhead will kill performance.
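A minimal sketch of that "read a chunk, then walk it byte by byte" pattern: a tiny state machine that just finds the end of the header block (the CRLFCRLF). The class and method names are mine, and a real parser would also tokenize as it goes and keep any body bytes left over in the chunk:

```csharp
using System;
using System.IO;

class HeaderScanner
{
    int _state; // how many bytes of the CRLFCRLF terminator are matched so far

    // Feeds one byte to the state machine; returns true once CRLFCRLF
    // has been seen, i.e. the header block is complete.
    public bool Feed(byte b)
    {
        if (b == (byte)'\r') _state = (_state == 2) ? 3 : 1;
        else if (b == (byte)'\n' && (_state == 1 || _state == 3)) _state++;
        else _state = 0;
        return _state == 4;
    }

    // Reads the stream in buffered chunks (not one byte per Read call)
    // and runs the FSM over each chunk. Returns the number of bytes up to
    // and including the end of the headers, or -1 if the stream ends first.
    public static int FindHeaderEnd(Stream stream, HeaderScanner fsm)
    {
        var chunk = new byte[4096];
        int consumed = 0;
        int n;
        while ((n = stream.Read(chunk, 0, chunk.Length)) > 0)
        {
            for (int i = 0; i < n; i++)
            {
                consumed++;
                if (fsm.Feed(chunk[i]))
                    return consumed; // everything after this is message body
            }
        }
        return -1;
    }
}
```

The per-byte work is a couple of comparisons, which is why this style is fast in practice despite sounding tedious; the expensive part, the stream read, happens 4096 bytes at a time.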
You should take a look at the HttpListener source code to see how it all works. Go to http://referencesource.microsoft.com/netframework.aspx and download the .NET 4.5 Update 1 source.
Be prepared to spend a lot of time digging through that and through the HTTP spec.
By the way, it's not difficult to create a program that handles a small subset of HTTP requests. But I wonder why you'd want to do that when you can just use HttpListener and have all the details handled for you.
Update
You are talking about two different protocols. HTTP and WebSocket are two entirely different things. As the Wikipedia article says:
The WebSocket Protocol is an independent TCP-based protocol. Its only relationship to HTTP is that its handshake is interpreted by HTTP servers as an Upgrade request.
With HTTP, you know that the server will send the stream and then close the connection; it's a stream of bytes with a defined end. WebSocket is a message-based protocol; it enables a stream of messages. Those messages have to be delineated in some way; the sender has to tell the receiver where the end of the message is. That can be implicit or explicit. There are several different ways this is done:
The sender includes the length of message in the first few bytes of the message. For example, the first four bytes are a binary integer that says how many bytes follow in that message. So the receiver reads the first four bytes, converts that to an integer, and then reads that many bytes.
The length of the message is implicit. For example, sender and receiver agree that all messages are 80 bytes long.
The first byte of the message is a message type, and each message type has a defined length. For example, message type 1 is 40 bytes, message type 2 is 27 bytes, etc.
Messages have some terminator. In a line-oriented message system, for example, messages are terminated by CRLF. The sender sends the text and then CRLF. The receiver reads bytes until it receives CRLF.
Whatever the case, sender and receiver must agree on how messages are structured. Otherwise the case that you're worried about does crop up: the receiver is left waiting for bytes that will never be received.
In order to handle possible communications problems you set the ReceiveTimeout property on the socket, so that a Read will throw SocketException if it takes too long to receive a complete message. That way, your program won't be left waiting indefinitely for data that is not forthcoming. But this should only happen in the case of communications problems. Any reasonable message format will include a way to determine the length of a message; either you know how much data is coming, or you know when you've reached the end of a message.
If you want to send a message you can just prepend its size: take the number of bytes in the message and prepend it as a fixed-size integer such as a ulong. At the receiving end, first read the size of a ulong (8 bytes), parse the length, then read that many bytes from the stream and close it.
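The length-prefix idea might look like this in C#. It is a sketch of the technique, not a standard wire format; I use a 4-byte int prefix here instead of a ulong for brevity, and BitConverter assumes both ends agree on byte order (little-endian on typical .NET hosts):

```csharp
using System;
using System.IO;

static class LengthPrefix
{
    // Writes a 4-byte length, then the payload.
    public static void WriteMessage(Stream stream, byte[] payload)
    {
        byte[] prefix = BitConverter.GetBytes(payload.Length);
        stream.Write(prefix, 0, 4);
        stream.Write(payload, 0, payload.Length);
    }

    // Reads the 4-byte prefix, then exactly that many payload bytes.
    public static byte[] ReadMessage(Stream stream)
    {
        int length = BitConverter.ToInt32(ReadExact(stream, 4), 0);
        return ReadExact(stream, length);
    }

    // Stream.Read may return partial data; loop until count bytes arrive.
    static byte[] ReadExact(Stream stream, int count)
    {
        var buffer = new byte[count];
        for (int got = 0; got < count; )
        {
            int n = stream.Read(buffer, got, count - got);
            if (n == 0) throw new EndOfStreamException();
            got += n;
        }
        return buffer;
    }
}
```

A production version would also sanity-check the parsed length against a maximum, so a corrupt or malicious prefix can't make the receiver allocate gigabytes.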
In an HTTP header you can read Content-Length: the length of the request body in octets (8-bit bytes).
Please tell me how to read data from a COM port in C# when the data arrives as bytes of variable length; the response may be a byte array of 20 or of 50 bytes. The main question is: how do you know that the device has stopped responding?
The most important part is defining the protocol. Your data should carry start and end markers (at the protocol level; not to be confused with the UART's start/stop bits) that tell your code when a complete packet has arrived. Usually you don't have to care about anything beyond this, since your receive handler can then collect each packet into an array.
http://msmvps.com/blogs/coad/archive/2005/03/23/SerialPort-_2800_RS_2D00_232-Serial-COM-Port_2900_-in-C_2300_-.NET.aspx
You don't. COM ports are a bit like TCP: they're a streaming service, and they only transfer 7 or 8 bits at a time (depending on how you set the port up; normally 8 bits).
If you want to send anything more complex than a byte, you need to build a protocol on top. If your data is text, a CR or null at the end will often do. If it's values spanning the whole byte range 0-255, then you need a more complex protocol to ensure that the framing of the data units is received correctly. Maybe your requirements can be met by a simple timeout, e.g. 'if no chars are received for 500 ms, that's the end of the data unit', but such timeout protocols are obviously low performance and subject to failures :(
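The text-with-terminator case is easy to sketch. This helper works on any Stream (you could pass it serialPort.BaseStream); the name and default terminator are my own choices:

```csharp
using System;
using System.IO;
using System.Text;

static class DelimitedRead
{
    // Accumulates bytes until the terminator arrives; assumes the payload
    // is single-byte text (ASCII or similar).
    public static string ReadUntil(Stream stream, byte terminator = (byte)'\r')
    {
        var sb = new StringBuilder();
        int b;
        while ((b = stream.ReadByte()) != -1)
        {
            if (b == terminator) return sb.ToString();
            sb.Append((char)b);
        }
        throw new EndOfStreamException("Stream ended before terminator.");
    }
}
```

For SerialPort specifically, the built-in ReadTo("\r") does essentially this, honoring ReadTimeout.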
Rgds,
Martin
I'm using the UdpClient class to send packets.
It seems that there's a per-packet size limit, since big packets never reach their destination. I tried to lower the packet size, which allows the packets to reach their destination. I read somewhere that the "standard" packet size limit is 512 bytes.
But I still need to send objects that are way larger than 512 bytes.
So my question is: is there a built-in way in .NET to split up a byte array into smaller packets. Obviously, I need to reassemble the split packets afterwards, too.
I saw the SendFile method in the Socket class, which I guess should be able to automatically split up big files. But the method doesn't allow byte array input (only file name). So it would only work for sending data that is stored on the hard drive, not for in-memory data.
The Send function in the Socket class takes a byte array as a parameter.
http://msdn.microsoft.com/en-us/library/w93yy28a.aspx
You can try this instead.
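There's no built-in splitter for in-memory data, though; if you stay with UDP you have to chunk manually. One way to sketch it, where the 8-byte index/total header is my own ad-hoc scheme and 512 bytes is the conservative datagram size from the question:

```csharp
using System;
using System.Collections.Generic;

static class UdpChunking
{
    const int PayloadSize = 504; // 512-byte datagram minus our 8-byte header

    // Splits data into datagrams laid out as [4-byte index][4-byte total][payload].
    // Each resulting array can be passed to UdpClient.Send as one datagram;
    // the receiver uses index/total to put the pieces back in order.
    public static List<byte[]> Split(byte[] data)
    {
        int total = (data.Length + PayloadSize - 1) / PayloadSize;
        var packets = new List<byte[]>();
        for (int i = 0; i < total; i++)
        {
            int offset = i * PayloadSize;
            int size = Math.Min(PayloadSize, data.Length - offset);
            var packet = new byte[8 + size];
            BitConverter.GetBytes(i).CopyTo(packet, 0);
            BitConverter.GetBytes(total).CopyTo(packet, 4);
            Array.Copy(data, offset, packet, 8, size);
            packets.Add(packet);
        }
        return packets;
    }
}
```

The catch is reassembly: the receiver must still cope with lost, duplicated, and reordered datagrams, which is the reason TCP often ends up being the simpler choice for bulk transfers like this.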
Sending a large block of data by UDP seems a little odd, because with UDP the datagrams are not guaranteed to arrive at the other side. And even if they all do arrive they're not guaranteed to be in the original order. Are you sure you want to use UDP?
Ciaran Keating was right. TCP was a better choice for my need.