NetworkStream doesn't flush data - C#

I'm writing a simple chat program using sockets. When I send a long message, flush the stream, and then send a short message, the end of the long message gets appended to the short message. It looks like this:
Send "aaasdsd"
Recieve "aaasdsd"
Send "bb"
Recieve "bbasdsd"
Through debugging I've found that the Flush method, which is supposed to clear all data from the stream, doesn't actually do that. According to MSDN, this is the expected behaviour, because NetworkStream is not buffered. How do I clear the stream in that case? I could follow every message with an empty one (consisting of \0 characters) of the same length, but that doesn't seem like the right approach, and it would also break some features I need.

TCP doesn't work this way. It's as simple as that.
TCP is a stream-based protocol. That means that you shouldn't ever treat it as a message-based protocol (unlike, say, UDP). If you need to send messages over TCP, you have to add your own messaging protocol on top of TCP.
What you're trying to do here is send two separate messages, and receive two separate messages on the other side. This would work fine on UDP (which is message-based), but it will not work on TCP, because TCP is a stream with no organisation.
So yeah, Flush works just fine. It's just that no matter how many times you call Flush on one side, or how many individual Sends you make, each Receive on the other end will get as much data as fits in its buffer, with no regard for the Send boundaries on the other side.
The solution you've devised (almost - just separate the strings with a single \0) is actually one of the proper ways to handle this. By doing that, you're working with messages on top of the stream again. This is called message framing - it allows you to tell individual messages apart. In your case, you've added delimiters between the messages. Think about writing the same data in a file - again, you'll need some way of your own to separate the individual messages (for example, using end lines).
Another way to handle message framing is using a length prefix - before you send the string itself, send its length. Then, when you read on the other side, you know there is always a length prefix between strings, so the reader knows exactly where each message ends.
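For illustration only (the helper names, the UTF-8 encoding and the 4-byte little-endian prefix are my own assumptions here, not something taken from your code), length-prefix framing might look roughly like this:

    using System;
    using System.IO;
    using System.Text;

    static class LengthPrefixedMessages
    {
        // Sender: write a 4-byte length, then the payload itself.
        public static void Send(Stream stream, string message)
        {
            byte[] payload = Encoding.UTF8.GetBytes(message);
            byte[] prefix = BitConverter.GetBytes(payload.Length); // little-endian on most platforms
            stream.Write(prefix, 0, prefix.Length);
            stream.Write(payload, 0, payload.Length);
        }

        // Receiver: read the 4-byte length, then exactly that many payload bytes.
        public static string Receive(Stream stream)
        {
            int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
            return Encoding.UTF8.GetString(ReadExactly(stream, length));
        }

        // A single Read may return fewer bytes than requested, so loop until we have them all.
        private static byte[] ReadExactly(Stream stream, int count)
        {
            byte[] buffer = new byte[count];
            int offset = 0;
            while (offset < count)
            {
                int read = stream.Read(buffer, offset, count - offset);
                if (read == 0)
                    throw new EndOfStreamException("Connection closed mid-message.");
                offset += read;
            }
            return buffer;
        }
    }

Both sides have to agree on the size and byte order of the prefix, but once they do, message boundaries survive no matter how TCP splits or merges the sends.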
Yet another way probably isn't very useful for your case - you can work with fixed-length data, so a message is always exactly 100 bytes, for example. This is very powerful when combined with pre-defined message types - message type 1 might contain exactly two integers, representing some coordinates, for example.
In either case, though, you'll need your own buffering on the receiving end. This is because (as you've already seen) a single receive can read multiple messages at once, and at the same time, it's not guaranteed to read a whole message in a single read. Writing your own networking is actually pretty tricky - unless you're doing this to actually learn network programming, I'd recommend using some ready-made technology - for example, Lidgren (a nice networking library, optimized for games but fine for general networking as well) or WCF. For a chat system, simple HTTP (especially with the bi-directional WebSockets) might be just fine as well.
EDIT:
As Damien correctly noted, there seems to be another problem with your code - you appear to be ignoring the return value of Read. The return value tells you the number of bytes you've actually read. Since you apparently have a fixed-size persistent buffer on the receiving side, every byte beyond the amount you've just read will still contain old data. To fix this, make sure you only work with as many bytes as Read returned. And since you seem to be ignoring the return value altogether, also make sure to handle the case where Read returns 0 - that means the other side has gracefully shut down its connection, and the receiving side should do the same.
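A minimal sketch of a receive loop that respects that return value (the buffer size and the UTF-8 decoding are just assumptions for the example):

    using System;
    using System.IO;
    using System.Text;

    // 'stream' is assumed to be the connection's NetworkStream.
    static void ReceiveLoop(Stream stream)
    {
        byte[] buffer = new byte[4096];
        while (true)
        {
            int bytesRead = stream.Read(buffer, 0, buffer.Length);
            if (bytesRead == 0)
            {
                // 0 means the other side shut down gracefully - stop reading and clean up.
                break;
            }

            // Only the first 'bytesRead' bytes are valid; anything beyond that
            // is leftover data from earlier reads.
            string chunk = Encoding.UTF8.GetString(buffer, 0, bytesRead);
            Console.WriteLine(chunk);
        }
    }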


ProtoBuf ParseDelimitedFrom is misaligning to the NetworkStream?

What I am Trying to do
I am the author of this project which grabs the currently playing song from Spotify's local instance via their local API and sets Discord's 'Now Playing' message to reflect it.
Pretty simple stuff, but for various reasons I want to switch to C#, and that doesn't have the same level of support for Spotify.
To cut a long story short, Cross-Platform + Working Useful API + Spotify Playlist = Clementine. So I decided to create a similar Discord integration, but for Clementine.
But there's a problem
I can create a socket that connects to 127.0.0.1:5500.
I can send a ConnectRequest message successfully through this socket.
I can receive these message types with no problems whatsoever:
KEEP_ALIVE,
PLAY,
PAUSE,
STOP,
REPEAT,
SHUFFLE,
VOLUME_UPDATE,
TRACK_POSITION_UPDATE
But if I try to Play a song from a Stopped State, 20 exceptions are thrown and then the "PLAY" message is parsed.
I believe this is what should be the CURRENT_METAINFO message.
Similar exceptions are thrown if I try to add a new song to the playlist.
The mechanism I am using to retrieve messages is
Message.Parser.ParseDelimitedFrom(Client.GetStream());
Where:
Message = Class defined in .proto file from Repo
Parser = Protobuf built-in Object Parser
ParseDelimitedFrom = Protobuf built-in method which takes a [Length:PayloadOfLength'Length'] message from a Stream and parses the Payload as an object of the provided type (Message).
Client = System.Net.Sockets.TcpClient
GetStream = System.Net.Sockets.NetworkStream
Aside from detecting roughly 3 'Unknown' messages for every keepalive when Clementine is idle (the NetworkStream seems to send 0's every quarter of a second when there is no data?), this method works just fine for messages that are not too complex (i.e. the ones without a SongMetadata class in the Response object, and the ones without a Response object at all).
My Suspicions
Due to Proto3 not being backwards-compatible with Proto2, I had to modify the provided .proto file slightly to remove all the 'optional' keywords, and the default values for things, and re-set all the enums to start from 0.
This may have introduced a subtle bug in the Parser which tries to read a value that doesn't exist, and moves on to the next instead of realising there's a default, or something like that.
Some of the exceptions seem to indicate that the parser is not handling the length of the data properly.
This could mean that the parser reads the stream to the end before the message has been completely written, instead of waiting for the rest before trying to parse it. It then fails to read the incomplete message, but removes it from the stream anyway, rendering all of the following chunks illegible as well.
It could also mean that due to the way Clementine nests messages, the parser is detecting the size of the outer message and not catering for the optional nested object properly (or the actual Clementine message is not setting the length appropriately).
It could also be a problem with the album art being a sequence of bytes of indeterminate length.
The walls I have hit
I have completely re-written the socket logic at least 3 times. The first two used bare Sockets and byte[] buffers to read messages.
This seemed promising but would never actually return from the recursive function that was intended to retrieve the remainder of the bytes for a particular message, as there were always more bytes to read. It didn't occur to me until writing this to try to build the buffer from the front instead and tryParse it each iteration until it understood the message, then remove the message from the buffer.
I have attempted to implement a byte[] buffer with the TcpClient in a similar manner to the above, on top of my current implementation. But again, the socket writing 0's instead of remaining idle caused a problem, with leading 0's being treated as an invalid tag and throwing exceptions.
I have attempted to use a BufferedStream to wrap the NetworkStream, but was unsure of exactly how to go about implementing a re-read of previously read, exception-causing data, once more data arrived.
Saucy
The actual code as a whole. Should be drop-in ready with any VSCode installation with the C# plugin and dotnet core properly installed and configured.
https://bitbucket.org/roberestarkk/discordclementineremotedotnet
A cry for Help
I would be immensely grateful for literally ANY assistance anyone can offer! I am at my wit's end with this project.
I've tried everything short of messaging the developers (which I intend to do shortly anyway, I just need to install an IRC client), and am no closer to getting to the bottom of things.
If I only knew WHY those exceptions were being thrown...
The solution to this problem was to roll my own message buffering: cut the first 4 bytes (an int) into a byte array, reverse it, and parse it into an int. If the int is non-zero (Clementine sends an int representing 0 every [interval] when it's not sending a message), I can then grab the actual message (buffer[3..length]) and feed it into the non-delimited version of protobuf's message parser.
Working code uploaded to the repo.
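For anyone hitting the same wall, a rough sketch of that approach as I understand it (Message is the class generated from the repo's .proto file; the helper names, and the assumption that the idle keep-alive is just a zero length prefix with no payload, are mine):

    using System;
    using System.IO;
    using Google.Protobuf;

    static class ClementineReader
    {
        // Clementine prefixes every message with a 4-byte big-endian length.
        public static Message ReadMessage(Stream stream)
        {
            byte[] lengthBytes = ReadExactly(stream, 4);
            if (BitConverter.IsLittleEndian)
                Array.Reverse(lengthBytes);               // big-endian on the wire -> host order
            int length = BitConverter.ToInt32(lengthBytes, 0);

            if (length == 0)
                return null;                              // idle keep-alive: a zero length, no payload

            byte[] payload = ReadExactly(stream, length);
            return Message.Parser.ParseFrom(payload);     // the non-delimited protobuf parse
        }

        // Loop until 'count' bytes arrive; a single Read may return fewer.
        private static byte[] ReadExactly(Stream stream, int count)
        {
            byte[] buffer = new byte[count];
            int offset = 0;
            while (offset < count)
            {
                int read = stream.Read(buffer, offset, count - offset);
                if (read == 0) throw new EndOfStreamException();
                offset += read;
            }
            return buffer;
        }
    }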

How do you receive packets through TCP sockets without knowing their packet size before receiving?

I am working on a network application that can send live video feed asynchronously from one application to another, sort of like Skype. The main issue I am having is that I want to be able to send the frames but not have to know their size each time before receiving.
The way AForge.NET works when handling images is that the size of the current frame will most likely be different from the one before it. The size is not static, so I was just wondering if there was a way to achieve this. I have already tried sending the length first and then the frame, but that is not what I am looking for.
First, make sure you understand that TCP itself has no concept of "packet" at all, not at the user code level. If one is conceptualizing one's TCP network I/O in terms of packets, they are probably getting it wrong.
Now, that said, you can impose a packet structure on the TCP stream of bytes. To do that when the packets are not always the same size, you can either transmit the length before the data, or delimit the data in some way, such as wrapping it in a self-describing encoding or terminating it with a known marker.
Note that adding structure around the data (encoding, terminating, whatever) when you're dealing with binary data is fraught with hassles, because binary data usually is required to support any combination of bytes. This introduces a need for escaping the data or otherwise being able to flag something that would normally look like a delimiter or terminator, so that it can be treated as binary data instead of some boundary of the data.
Personally, I'd just write a length before the data. It's a simple and commonly used technique. If you still don't want to do it that way, you should be specific and explain why you don't, so that your specific scenario can be better understood.
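A sketch of writing the length before the data, assuming the frame has already been encoded to a byte[] (the method names are illustrative; BinaryWriter and BinaryReader both use little-endian integers, so the two ends agree automatically):

    using System.IO;

    static class FrameProtocol
    {
        // Sender: length first, then the frame bytes.
        public static void SendFrame(Stream stream, byte[] frame)
        {
            var writer = new BinaryWriter(stream);
            writer.Write(frame.Length);   // 4-byte length prefix
            writer.Write(frame);
            writer.Flush();
        }

        // Receiver: read the length, then that many bytes.
        public static byte[] ReceiveFrame(Stream stream)
        {
            var reader = new BinaryReader(stream);
            int length = reader.ReadInt32();
            return reader.ReadBytes(length);   // keeps reading until 'length' bytes or end of stream
        }
    }

In practice you would keep one writer and one reader alive for the lifetime of the connection rather than constructing them per frame, but the framing idea is the same.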

Parsing data from a network stream?

Recently I started working with sockets. I realized that when reading from a network stream, you cannot know how much data is coming in. So either you know in advance how many bytes have to be received, or you know which bytes mark the end.
Since I am currently trying to implement a C# WebSocket server, I need to process HTTP requests. An HTTP request can have arbitrary length, so knowing the number of bytes in advance is out of the question. But an HTTP request always has a certain format: it starts with the request line, followed by zero or more headers, etc. So with all this information it should be simple, right?
Nope.
One approach I came up with was reading all data until a specific sequence of bytes was recognized. The StreamReader class has the ReadLine method which, I believe, works like this. For HTTP a reasonable delimiter would be the empty line separating the message head from the body.
The obvious problem here is the requirement of a (preferably short) termination sequence, like a line break. Even the HTTP specification suggests that two adjacent CRLFs are not a good choice, since they could also occur at the beginning of the message. And after all, two CRLFs are not a simple delimiter anyway.
So expanding the method to arbitrary type-3 grammars, I concluded the best choice for parsing the data is a finite state machine. I can feed the data to the machine byte after byte, just as I am reading it from the network stream. And as soon as the machine accepts the input I can stop reading data. Also, the FSM could immediately capture the significant tokens.
But is this really the best solution? Reading byte after byte and validating it with a custom parser seems tedious and expensive. And the FSM would be either slow or quite ugly. So...
How do you process data from a network stream when the form is known but not the size?
How can classes like the HttpListener parse the messages and be fast at it too?
Did I miss something here? How would this usually be done?
HttpListener and other such components can parse the messages because the format is deterministic. The Request is well documented. The request header is a series of CRLF-terminated lines, followed by a blank line (two CRLF in a row).
The message body can be difficult to parse, but it's deterministic in that the header tells you what encoding is used, whether it's compressed, etc. Even multi-part messages are not terribly difficult to parse.
Yes, you do need a state machine to parse HTTP messages. And yes you have to parse it byte-by-byte. It's somewhat involved, but it's very fast. Typically you read a bunch of data from the stream into a buffer and then process that buffer byte-by-byte. You don't read the stream one byte at a time because the overhead will kill performance.
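A small sketch of that pattern - read in chunks, accumulate, and scan the accumulated bytes for the blank line (CR LF CR LF) that ends the header block. The buffer size and names are just for illustration:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Text;

    static class HttpHeaderReader
    {
        public static string ReadHeaders(Stream stream)
        {
            var received = new List<byte>();
            byte[] chunk = new byte[4096];

            while (true)
            {
                int bytesRead = stream.Read(chunk, 0, chunk.Length);
                if (bytesRead == 0)
                    throw new EndOfStreamException("Connection closed before the headers were complete.");

                for (int i = 0; i < bytesRead; i++)
                    received.Add(chunk[i]);

                int end = FindHeaderEnd(received);
                if (end >= 0)
                {
                    // Anything after 'end' is the start of the body and must be kept by the caller.
                    return Encoding.ASCII.GetString(received.ToArray(), 0, end);
                }
            }
        }

        // Returns the index just past the first \r\n\r\n, or -1 if it isn't there yet.
        private static int FindHeaderEnd(List<byte> data)
        {
            for (int i = 3; i < data.Count; i++)
            {
                if (data[i - 3] == (byte)'\r' && data[i - 2] == (byte)'\n' &&
                    data[i - 1] == (byte)'\r' && data[i] == (byte)'\n')
                    return i + 1;
            }
            return -1;
        }
    }

A real parser would remember where it stopped scanning instead of rescanning from the start each time, and would tokenize as it goes, but the overall structure is the same.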
You should take a look at the HttpListener source code to see how it all works. Go to http://referencesource.microsoft.com/netframework.aspx and download the .NET 4.5 Update 1 source.
Be prepared to spend a lot of time digging through that and through the HTTP spec.
By the way, it's not difficult to create a program that handles a small subset of HTTP requests. But I wonder why you'd want to do that when you can just use HttpListener and have all the details handled for you.
Update
You are talking about two different protocols. HTTP and WebSocket are two entirely different things. As the Wikipedia article says:
The WebSocket Protocol is an independent TCP-based protocol. Its only relationship to HTTP is that its handshake is interpreted by HTTP servers as an Upgrade request.
With HTTP, you know that the server will send the stream and then close the connection; it's a stream of bytes with a defined end. WebSocket is a message-based protocol; it enables a stream of messages. Those messages have to be delineated in some way; the sender has to tell the receiver where the end of the message is. That can be implicit or explicit. There are several different ways this is done:
The sender includes the length of the message in the first few bytes. For example, the first four bytes are a binary integer that says how many bytes follow in that message. So the receiver reads the first four bytes, converts them to an integer, and then reads that many bytes.
The length of the message is implicit. For example, sender and receiver agree that all messages are 80 bytes long.
The first byte of the message is a message type, and each message type has a defined length. For example, message type 1 is 40 bytes, message type 2 is 27 bytes, etc.
Messages have some terminator. In a line-oriented message system, for example, messages are terminated by CRLF. The sender sends the text and then CRLF. The receiver reads bytes until it receives CRLF.
Whatever the case, sender and receiver must agree on how messages are structured. Otherwise the case that you're worried about does crop up: the receiver is left waiting for bytes that will never be received.
In order to handle possible communications problems you set the ReceiveTimeout property on the socket, so that a Read will throw SocketException if it takes too long to receive a complete message. That way, your program won't be left waiting indefinitely for data that is not forthcoming. But this should only happen in the case of communications problems. Any reasonable message format will include a way to determine the length of a message; either you know how much data is coming, or you know when you've reached the end of a message.
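For example (the 5-second value is arbitrary; note that when you read through a NetworkStream rather than the raw Socket, the timeout surfaces as an IOException wrapping the SocketException):

    using System;
    using System.IO;
    using System.Net.Sockets;

    // 'client' is assumed to be a connected TcpClient.
    static int ReadWithTimeout(TcpClient client, byte[] buffer)
    {
        client.ReceiveTimeout = 5000;               // milliseconds, applied to the underlying socket
        NetworkStream stream = client.GetStream();
        try
        {
            return stream.Read(buffer, 0, buffer.Length);
        }
        catch (IOException ex) when (ex.InnerException is SocketException se &&
                                     se.SocketErrorCode == SocketError.TimedOut)
        {
            // Nothing arrived within the timeout - treat it as a communications problem.
            return -1;
        }
    }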
If you want to send a message, you can just prepend the size of the message to it: get the number of bytes in the message and prepend a ulong containing that count. At the receiver, read sizeof(ulong) bytes, parse them into the length, then read that many bytes from the stream and then close it.
In an HTTP header you can see the same idea: Content-Length - the length of the request body in octets (8-bit bytes).

Does BeginReceive() get everything sent by BeginSend()?

I'm writing a program that will have both a server side and a client side, and the client side will connect to a server hosted by the same program (but by another instance of it, and usually on another machine). So basically, I have control over both aspects of the protocol.
I am using BeginReceive() and BeginSend() on both sides to send and receive data. My question is if these two statements are true:
Using a call to BeginReceive() will give me the entire data that was sent by a single call to BeginSend() on the other end when the callback function is called.
Using a call to BeginSend() will send the entire data I pass it to the other end, and it will all be received by a single call to BeginReceive() on the other end.
The two are basically the same in fact.
If the answer is no, which I'm guessing is the case based on what I've read about sockets, what is the best way to handle commands? I'm writing a game that will have commands such as PUT X Y. I was thinking of appending a special character (# for example) to the end of each command, and each time I receive data, I append it to a buffer, then parse it only after I encounter a #.
No, you can't expect BeginReceive to necessarily receive all of the data from one call to BeginSend. You can send a lot of data in one call to BeginSend, which could very well be split across several packets. You may receive each packet's data in a separate receive call.
The two main ways of splitting a stream into multiple chunks are:
Use a delimiter (as per your current suggestion). This has the benefit of not needing to know the size beforehand, but has the disadvantage of being relatively hard to parse, and potentially introducing requirements such as escaping the delimiter.
Prepend the size of each message before the message. The receiver can read the length first, and then know exactly how much data to expect.
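To illustrate the first option with the '#'-terminated commands from the question (the class and method names are hypothetical, and this assumes the commands are plain ASCII and '#' never appears inside one):

    using System.Collections.Generic;
    using System.Text;

    // Accumulates received text and hands back complete '#'-terminated commands.
    class CommandBuffer
    {
        private readonly StringBuilder _pending = new StringBuilder();

        // Call from the EndReceive callback with exactly the number of bytes that were read.
        public IEnumerable<string> Append(byte[] data, int bytesRead)
        {
            _pending.Append(Encoding.ASCII.GetString(data, 0, bytesRead));

            var commands = new List<string>();
            string text = _pending.ToString();
            int delimiter;
            while ((delimiter = text.IndexOf('#')) >= 0)
            {
                commands.Add(text.Substring(0, delimiter));   // e.g. "PUT 3 4"
                text = text.Substring(delimiter + 1);
            }

            _pending.Clear();
            _pending.Append(text);   // keep whatever trailing partial command is left
            return commands;
        }
    }

Each receive callback then boils down to something like foreach (var command in commandBuffer.Append(receiveBuffer, bytesRead)) Handle(command); where Handle is whatever processes a single command.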

How should I handle incomplete packet buffers?

I am writing a client for a server that typically sends data as strings of 500 bytes or less. However, the data will occasionally exceed that, and for all the client knows a single set of data could contain 200,000 bytes (on initialization or significant events). However, I would rather not have each client running with a 50 MB socket buffer (if that's even possible).
Each set of data is delimited by a null \0 character. What kind of structure should I look at for storing partially sent data sets?
For example, the server may send ABCDEFGHIJKLMNOPQRSTUV\0WXYZ\0123!\0. I would want to process ABCDEFGHIJKLMNOPQRSTUV, WXYZ, and 123! independently. Also, the server could send ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890LOL123HAHATHISISREALLYLONG without the terminating character. I would want that data set stored somewhere for later appending and processing.
Also, I'm using asynchronous socket methods (BeginSend, EndSend, BeginReceive, EndReceive) if that matters.
Currently I'm debating between List<Byte> and StringBuilder. Any comparison of the two for this situation would be very helpful.
Read the data from the socket into a buffer. When you get the terminating character, turn it into a message and send it on its way to the rest of your code.
Also, remember that TCP is a stream, not a packet. So you should never assume that you will get everything sent at one time in a single read.
As far as buffers go, you should probably only need one per connection at most. I'd probably start with the max size that you reasonably expect to receive, and if that fills, create a new buffer of a larger size - a typical strategy is to double the size when you run out to avoid churning through too many allocations.
If you have multiple incoming connections, you may want to do something like create a pool of buffers, and just return "big" ones to the pool when done with them.
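On newer .NET versions, System.Buffers.ArrayPool<byte> already implements that pooling idea, so you don't have to write the pool yourself. A sketch ('stream' and the rented size are placeholders):

    using System.Buffers;

    byte[] buffer = ArrayPool<byte>.Shared.Rent(64 * 1024);   // returns an array of at least this size
    try
    {
        int bytesRead = stream.Read(buffer, 0, buffer.Length);
        // ... process the first 'bytesRead' bytes ...
    }
    finally
    {
        ArrayPool<byte>.Shared.Return(buffer);   // hand it back so other connections can reuse it
    }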
You could just use a List<byte> as your buffer, so the .NET framework takes care of automatically expanding it as needed. When you find a null terminator you can use List.RemoveRange() to remove that message from the buffer and pass it to the next layer up.
You'd probably want to add a check and throw an exception if it exceeds a certain length, rather than just wait until the client runs out of memory.
(This is very similar to Ben S's answer, but I think a byte array is a bit more robust than a StringBuilder in the face of encoding issues. Decoding bytes to a string is best done higher up, once you have a complete message.)
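A sketch of that, assuming UTF-8 text and an arbitrary 1 MB cap (all the names here are illustrative):

    using System;
    using System.Collections.Generic;
    using System.Text;

    class NullDelimitedBuffer
    {
        private const int MaxBufferedBytes = 1024 * 1024;   // guard against a peer that never sends \0
        private readonly List<byte> _buffer = new List<byte>();

        // Append the bytes from one receive and pull out any complete \0-terminated messages.
        public IEnumerable<string> Append(byte[] data, int bytesRead)
        {
            for (int i = 0; i < bytesRead; i++)
                _buffer.Add(data[i]);

            if (_buffer.Count > MaxBufferedBytes)
                throw new InvalidOperationException("Incoming message exceeded the maximum allowed size.");

            var messages = new List<string>();
            int terminator;
            while ((terminator = _buffer.IndexOf((byte)0)) >= 0)
            {
                // Decode the complete message (without its \0) and remove it from the buffer.
                messages.Add(Encoding.UTF8.GetString(_buffer.ToArray(), 0, terminator));
                _buffer.RemoveRange(0, terminator + 1);
            }
            return messages;
        }
    }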
I would just use a StringBuilder and read in one character at a time, copying and emptying the builder whenever I hit a null terminator.
I wrote this answer regarding Java sockets but the concept is the same.
What's the best way to monitor a socket for new data and then process that data?
