TCP Framing with Binary Protocol - C#

Hey, I'm having an issue separating packets using a custom binary protocol.
Currently the server-side code looks like this.
public void HandleConnection(object state)
{
TcpClient client = threadListener.AcceptTcpClient();
NetworkStream stream = client.GetStream();
byte[] data = new byte[4096];
while (true)
{
int recvCount = stream.Read(data, 0, data.Length);
if (recvCount == 0) break;
LogManager.Debug(Utility.ToHexDump(data, 0, recvCount));
//processPacket(new MemoryStream(data, 0, recvCount));
}
LogManager.Debug("Client disconnected");
client.Close();
Dispose();
}
I've been watching the hex dumps of the packets, and sometimes the entire packet comes in one shot, say all 20 bytes. Other times it comes in fragmented. How do I need to buffer this data so I can pass it to my processPacket() method correctly? I'm attempting to use a single-byte opcode as the only header; should I add something like a (ushort)contentLength to the header as well? I'm trying to keep the protocol as lightweight as possible, and this system won't be sending very large packets (< 128 bytes).
The client-side code I'm testing with is as follows.
public void auth(string user, string password)
{
using (TcpClient client = new TcpClient())
{
client.Connect(IPAddress.Parse("127.0.0.1"), 9032);
NetworkStream networkStream = client.GetStream();
using (BinaryWriter writer = new BinaryWriter(networkStream))
{
writer.Write((byte)0); //opcode
writer.Write(user.ToUpper());
writer.Write(password.ToUpper());
writer.Write(SanitizationMgr.Verify()); //App hash
writer.Write(Program.Seed);
}
}
}
I'm not sure if that could be what's messing it up, and there doesn't seem to be much info on the web about binary protocols, especially where C# is involved. Any comments would be helpful. =)
Solved it with this. I'm not sure if it's correct, but it seems to give my handlers just what they need.
public void HandleConnection(object state)
{
TcpClient client = threadListener.AcceptTcpClient();
NetworkStream stream = client.GetStream();
byte[] data = new byte[1024];
uint contentLength = 0;
var packet = new MemoryStream();
while (true)
{
int recvCount = stream.Read(data, 0, data.Length);
if (recvCount == 0) break;
if (contentLength == 0 && recvCount < headerSize)
{
LogManager.Error("Got incomplete header!");
Dispose();
return;
}
if(contentLength == 0) //Get the payload length
contentLength = BitConverter.ToUInt16(data, 1);
packet.Write(data, 0, recvCount); //Buffer the data we got into our MemoryStream (the offset argument is into 'data', so it must be 0)
if (packet.Length < contentLength + headerSize) //if it's not enough, continue trying to read
continue;
//We have a full packet, pass it on
//LogManager.Debug(Utility.ToHexDump(packet));
processPacket(packet);
//reset for next packet
contentLength = 0;
packet = new MemoryStream();
}
LogManager.Debug("Client disconnected");
client.Close();
Dispose();
}

You should just treat it as a stream. Don't rely on any particular chunking behaviour.
Is the amount of data you need always the same? If not, you should change the protocol (if you can) to prefix the logical "chunk" of data with the length in bytes.
In this case you're using BinaryWriter on one side, so attaching a BinaryReader to the NetworkStream returned by TcpClient.GetStream() would seem like the easiest approach. If you really want to capture all the data for a chunk at a time though, you should go back to my idea of prefixing the data with its length. Then just loop round until you've got all the data.
(Make sure you've got enough data to read the length though! If your length prefix is 4 bytes, you don't want to read 2 bytes and miss the next 2...)
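A minimal sketch of that read loop for the question's framing (one opcode byte plus a ushort payload length); the header layout, the ReadExactly helper, and the processPacket signature here are assumptions, not part of the original code:
// Hypothetical helper: loop until exactly 'count' bytes have arrived,
// because NetworkStream.Read may return fewer bytes than requested.
void ReadExactly(Stream stream, byte[] buffer, int offset, int count)
{
    while (count > 0)
    {
        int read = stream.Read(buffer, offset, count);
        if (read == 0) throw new EndOfStreamException("Connection closed mid-frame");
        offset += read;
        count -= read;
    }
}

void ReadFrames(NetworkStream stream)
{
    byte[] header = new byte[3]; // 1-byte opcode + 2-byte little-endian length
    while (true)
    {
        ReadExactly(stream, header, 0, header.Length);
        byte opcode = header[0];
        ushort contentLength = BitConverter.ToUInt16(header, 1);

        byte[] payload = new byte[contentLength];
        ReadExactly(stream, payload, 0, contentLength);

        // processPacket is the question's handler; the exact signature is assumed here.
        processPacket(opcode, new MemoryStream(payload));
    }
}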

Related

System.OutOfMemoryException on server side for client files

I am getting data from a client and saving it to the local drive on the local host. I have checked it with a file of 221 MB, but a test with a file of 1 GB gives the following exception:
An unhandled exception of type 'System.OutOfMemoryException' occurred in mscorlib.dll
Following is the code on the server side where the exception originates.
UPDATED
Server:
public void Thread()
{
TcpListener tcpListener = new TcpListener(ipaddr, port);
tcpListener.Start();
MessageBox.Show("Listening on port " + port);
TcpClient client=new TcpClient();
int bufferSize = 1024;
NetworkStream netStream;
int bytesRead = 0;
int allBytesRead = 0;
// Start listening
tcpListener.Start();
// Accept client
client = tcpListener.AcceptTcpClient();
netStream = client.GetStream();
// Read length of incoming data to reserver buffer for it
byte[] length = new byte[4];
bytesRead = netStream.Read(length, 0, 4);
int dataLength = BitConverter.ToInt32(length,0);
// Read the data
int bytesLeft = dataLength;
byte[] data = new byte[dataLength];
while (bytesLeft > 0)
{
int nextPacketSize = (bytesLeft > bufferSize) ? bufferSize : bytesLeft;
bytesRead = netStream.Read(data, allBytesRead, nextPacketSize);
allBytesRead += bytesRead;
bytesLeft -= bytesRead;
}
// Save to desktop
File.WriteAllBytes(@"D:\LALA\Miscellaneous\" + shortFileName, data);
// Clean up
netStream.Close();
client.Close();
}
I am getting the file size from the client first, followed by the data.
1) Should I increase the buffer size, or use some other technique?
2) File.WriteAllBytes() and File.ReadAllBytes() seem to block and freeze the PC. Is there an async method for this that could also report the progress of the file received on the server side?
You don't need to read the whole thing to memory before writing it to disc. Just copy straight from the network stream to a FileStream:
byte[] length = new byte[4];
// TODO: Validate that bytesRead is 4 after this... it's unlikely but *possible*
// that you might not read the whole length in one go.
bytesRead = netStream.Read(length, 0, 4);
int bytesLeft = BitConverter.ToInt32(length,0);
using (var output = File.Create(@"D:\Javed\Miscellaneous\" + shortFileName))
{
netStream.CopyTo(output, bytesLeft);
}
Note that instead of calling netStream.Close() explicitly, you should use a using statement:
using (Stream netStream = ...)
{
// Read from it
}
That way the stream will be closed even if an exception is thrown.
The CLR has a per-object limit a bit short of 2 GB. That's the theory; in practice, how much memory you can allocate depends on how much memory the framework allows you to allocate. I wouldn't expect it to let you allocate a 1 GB array. You should allocate a smaller buffer and write the data to the disk file in chunks.
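For example, a minimal sketch of chunked copying, reusing netStream, dataLength, and shortFileName from the question's code (the 64 KB buffer size is an arbitrary choice):
byte[] chunk = new byte[64 * 1024];
long remaining = dataLength;
using (var output = File.Create(@"D:\LALA\Miscellaneous\" + shortFileName))
{
    while (remaining > 0)
    {
        int toRead = (int)Math.Min(chunk.Length, remaining);
        int read = netStream.Read(chunk, 0, toRead);
        if (read == 0) break; // the client closed the connection early
        output.Write(chunk, 0, read);
        remaining -= read;
    }
}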
The "out of memory" exception happens because you are trying to place the entire file into memory before dumping it on disk. This is suboptimal, because you don't need the entire file in memory in order to write into the file: you can read it block-by-block in reasonably-sized increments, and write it out as you go.
Starting with .NET 4.0 you can use Stream.CopyTo method to accomplish this in a few lines of code:
// Read and ignore the initial four bytes of length from the stream
byte[] ignore = new byte[4];
int bytesRead = 0;
do {
// This should complete in a single call, but the API requires you
// to do it in a loop.
bytesRead += netStream.Read(ignore, bytesRead, 4-bytesRead);
} while (bytesRead != 4);
// Copy the rest of the stream to a file
using (var fs = new FileStream(@"D:\Javed\Miscellaneous\" + shortFileName, FileMode.Create)) {
netStream.CopyTo(fs);
}
netStream.Close();
Starting with .NET 4.5 you can use CopyToAsync, too, which would give you a way to do reading and writing asynchronously.
Note the code that drops the initial four bytes from the stream. This is done to avoid writing the length of the stream along with the "payload" bytes. If you have control over the network protocol, you could change the sending side to stop prefixing the stream with its length, and remove the code that reads and ignores it on the receiving side.
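A rough sketch of the async variant mentioned above, under the same assumption of a 4-byte length prefix that gets skipped (note that CopyToAsync itself does not report progress; that would require a manual read/write loop):
public async Task SaveToFileAsync(NetworkStream netStream, string path)
{
    // Read and discard the 4-byte length prefix, as in the synchronous example above.
    byte[] ignore = new byte[4];
    int read = 0;
    while (read < 4)
    {
        int n = await netStream.ReadAsync(ignore, read, 4 - read);
        if (n == 0) throw new EndOfStreamException("Connection closed before the length prefix arrived");
        read += n;
    }

    // Copy the rest of the stream to disk without buffering it all in memory.
    using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None, 81920, useAsync: true))
    {
        await netStream.CopyToAsync(fs);
    }
}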

Best way to indicate that all data has been received over the network?

I've got a client / server application that works pretty well, but it's missing one crucial piece of behavior to make it a bit more solid.
Right now, it's far from "strong" in terms of network capabilities. I'm trying to get it there, and research has led me to believe that I need some sort of protocol in place to ensure that no data is lost during network transmissions.
I've heard of a few methods. One that I think will work best for our situation is to use a terminator, something like an <EOF> tag. My issue is that I'm not sure of the best way to implement this.
Here's a couple code snippets that I'll be modifying to include a terminator after figuring out the best solution.
Client:
TcpClient client = new TcpClient();
client.Connect(hostname, portNo);
using (var stream = client.GetStream())
{
//send request
stream.Write(data, 0, data.Length);
stream.Flush();
//read server response
if (stream.CanRead)
{
byte[] buffer = new byte[1024];
string response = "";
int bytesRead = 0;
do
{
bytesRead = stream.Read(buffer, 0, buffer.Length);
response += Encoding.ASCII.GetString(buffer, 0, bytesRead);
} //trying to replace 'DataAvailable', it doesn't work well
while (stream.DataAvailable);
}
}
Note that I'm trying to replace the stream.DataAvailable check for more data in the stream. It's been causing problems.
Server:
var listener = new TcpListener(IPAddress.Any, portNo);
listener.Start();
var client = listener.AcceptTcpClient();
using (var stream = client.GetStream())
{
var ms = new System.IO.MemoryStream();
byte[] buffer = new byte[4096];
int bytesRead = 0;
do
{
bytesRead = stream.Read(buffer, 0, buffer.Length);
ms.Write(buffer, 0, bytesRead);
} //also trying to replace this 'stream.DataAvailable'
while (stream.DataAvailable);
ms.Position = 0;
string requestString = Encoding.UTF8.GetString(ms.ToArray());
ms.Position = 0;
/*
process request and create 'response'
*/
byte[] responseBytes = Encoding.UTF8.GetBytes(response);
stream.Write(responseBytes, 0, responseBytes.Length);
stream.Flush();
}
So, given these two code examples, how can I modify these to both include and check for some sort of data terminator that indicates it's safe to stop reading data?
You can rely on TCP transmitting all the data before the FIN. The problem with your code is that DataAvailable is not a valid test for end of stream. It is a test for whether data is available right now.
So you are terminating your reading loop prematurely, and thus missing data at the receiver and getting resets at the sender.
You should just block in the Read() method until you receive the end-of-stream indication, which in C# is Read() returning 0.
You don't need your own additional EOS indicator.
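A minimal sketch of that receiving loop, reusing the stream variable from the question's server code and assuming the sender closes or shuts down its half of the connection once it has finished writing:
var ms = new MemoryStream();
byte[] buffer = new byte[4096];
int bytesRead;
// Read blocks until data arrives and returns 0 only when the peer has closed its half of the connection.
while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    ms.Write(buffer, 0, bytesRead);
}
string request = Encoding.UTF8.GetString(ms.ToArray());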
We ended up using an EOS (end of stream) indicator on both ends of our project. I won't post the full code example, but here's a small snippet of how it works:
stream.Write(data, 0, data.Length);
stream.WriteByte(Convert.ToByte(ConsoleKey.Escape));
stream.Flush();
On the receiving end of this stream, the loop reads data byte-by-byte. Once it receives the ConsoleKey.Escape byte, it terminates the loop. It works!
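A sketch of what that byte-by-byte receiving loop might look like (the original receiving code wasn't posted, and the stream variable is assumed); note that this only works if the escape byte value can never occur inside the payload itself:
var ms = new MemoryStream();
byte terminator = Convert.ToByte(ConsoleKey.Escape); // 0x1B
int next;
// ReadByte blocks until a byte arrives and returns -1 when the connection is closed.
while ((next = stream.ReadByte()) != -1)
{
    if (next == terminator) break; // end-of-stream marker reached
    ms.WriteByte((byte)next);
}
string message = Encoding.ASCII.GetString(ms.ToArray());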

Sending data over TCP

I have a client/server situation where the client sends the data (a movie, for example) to the server, and the server saves that data to the HDD.
It sends the data in a fixed-size array of bytes. After the bytes are sent, the server asks if there is more; if yes, the client sends more, and so on. Everything goes well and all the data gets across.
But when I try to play the movie, it can't be played, and if I look at the file length of each movie (client and server), the server movie is bigger than the client movie. Also, when I look at the console at the end of the sending/receiving, more than 100% of the bytes have come across.
The only thing I can think of that could be wrong is that my server reads from the stream until the fixed buffer array is full and therefore ends up with more bytes than the client sent. However, if that is the problem, how can I solve it?
I've just added the two methods for sending, because the TCP connection itself works. Any help is welcome.
Client
public void SendData(NetworkStream nws, StreamReader sr, StreamWriter sw)
{
using (FileStream reader = new FileStream(this.path, FileMode.Open, FileAccess.Read))
{
byte[] buffer = new byte[1024];
int currentBlockSize = 0;
while ((currentBlockSize = reader.Read(buffer, 0, buffer.Length)) > 0)
{
sw.WriteLine(true.ToString());
sw.Flush();
string wait = sr.ReadLine();
nws.Write(buffer, 0, buffer.Length);
nws.Flush();
label1.Text = sr.ReadLine();
}
sw.WriteLine(false.ToString());
sw.Flush();
}
}
Server
private void GetMovieData(NetworkStream nws, StreamReader sr, StreamWriter sw, Film filmInfo)
{
Console.WriteLine("Adding Movie: {0}", filmInfo.Titel);
double persentage = 0;
string thePath = this.Path + @"\films\" + filmInfo.Titel + @"\";
Directory.CreateDirectory(thePath);
thePath += filmInfo.Titel + filmInfo.Extentie;
try
{
byte[] buffer = new byte[1024]; //1Kb buffer
long fileLength = filmInfo.TotalBytes;
long totalBytes = 0;
using (FileStream writer = new FileStream(thePath, FileMode.CreateNew, FileAccess.Write))
{
int currentBlockSize = 0;
bool more;
sw.WriteLine("DATA");
sw.Flush();
more = Convert.ToBoolean(sr.ReadLine());
while (more)
{
sw.WriteLine("SEND");
sw.Flush();
currentBlockSize = nws.Read(buffer, 0, buffer.Length);
totalBytes += currentBlockSize;
writer.Write(buffer, 0, currentBlockSize);
persentage = (double)totalBytes * 100.0 / fileLength;
Console.WriteLine(persentage.ToString());
sw.WriteLine("MORE");
sw.Flush();
string test = sr.ReadLine();
more = Convert.ToBoolean(test);
}
}
}
catch (Exception ex)
{
Console.WriteLine(ex.ToString());
}
}
There is a reason why Read() returns the number of bytes read: it's possible it will return less than the size of the buffer. Because of this, you should do something like nws.Write(buffer, 0, currentBlockSize); in SendData(). But this will break your protocol, because the blocks will no longer all be the same size.
But I find it hard to believe your code actually behaves the way you describe. That's because Read() in GetMovieData() also may not fill the whole buffer. Also, StreamReader is allowed to keep some data in an internal buffer, which would mean you could read some completely bogus data.
I think code like this, where you're combining Streams and StreamReaders/StreamWriters, is a really bad idea. It would be hard to make it actually correct. What you should do instead is make your protocol completely byte-based (not character-based), even if those bytes are ASCII-encoded "SEND".
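As an illustration, a minimal sketch of a purely byte-based exchange for this transfer, assuming an 8-byte length prefix followed by the raw file bytes; this replaces the SEND/MORE text handshake from the question, and the variable names (nws, path, thePath) are borrowed from it:
// Client side: write an 8-byte length prefix, then stream the file contents.
void SendFile(NetworkStream nws, string path)
{
    using (var file = File.OpenRead(path))
    {
        byte[] lengthPrefix = BitConverter.GetBytes(file.Length); // Int64, 8 bytes
        nws.Write(lengthPrefix, 0, lengthPrefix.Length);
        file.CopyTo(nws); // available from .NET 4.0
    }
}

// Server side: read the prefix, then copy exactly that many bytes to disk.
void ReceiveFile(NetworkStream nws, string thePath)
{
    byte[] prefix = new byte[8];
    int got = 0;
    while (got < prefix.Length)
    {
        int n = nws.Read(prefix, got, prefix.Length - got);
        if (n == 0) throw new EndOfStreamException("Connection closed before the length arrived");
        got += n;
    }
    long remaining = BitConverter.ToInt64(prefix, 0);

    byte[] buffer = new byte[81920];
    using (var writer = File.Create(thePath))
    {
        while (remaining > 0)
        {
            int read = nws.Read(buffer, 0, (int)Math.Min(buffer.Length, remaining));
            if (read == 0) throw new EndOfStreamException("Connection closed mid-transfer");
            writer.Write(buffer, 0, read);
            remaining -= read;
        }
    }
}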
Let me give it a try, but don't shoot me if it doesn't work.
I see that you have a buffer size of 1024, regardless of how many bytes there are left in the file that you send. Say you have a file of 2900 bytes; that would require three sends, and on the last one there will only be 852 bytes left to send. Yet you create a buffer of 1024 and send over 1024 bytes. This means that your server receives 852 bytes of real data and 172 zero-filled bytes. Even so, all those 172 bytes are saved to the movie file on the server.
I guess there's an easy fix: when you write the data to the server, use currentBlockSize as the length argument. So in the SendData method on the client, inside the while loop, change:
nws.Write(buffer, 0, buffer.Length);
to this:
nws.Write(buffer, 0, currentBlockSize);

SslStream read all bytes

When I try to read from an SslStream, the Read() function never ends if I don't set a connection timeout, but if I do set one, I get a timeout exception. This person has the same problem: http://msdn.microsoft.com/en-us/library/system.net.security.sslstream.read.aspx. I don't know what to do. Here is the code:
public byte[] ReadBytes()
{
this.bufferGlobal.Clear();
byte[] buffer = new byte[this.bufferSize];
int recv = this.stream.Read(buffer, 0, buffer.Length);
while (recv != 0)
{
addBytes(buffer, ref bufferGlobal, recv);
recv = this.stream.Read(buffer, 0, buffer.Length);
}
return (byte[])this.bufferGlobal.ToArray(typeof(byte));
}
Thx in advance.
UPD:
I think I found the answer. I can set the read timeout on the SslStream to one; that value doesn't affect keeping the socket alive (meaning you can download huge files and not worry about the SslStream closing the connection or interrupting the received data). I'm still testing this solution, but it seems to work fine. Thanks everybody.
You are reading from a network stream, which means you will not encounter the end of stream until the other side closes its half of the connection. It isn't enough for it to stop sending data. So, make your other program close the connection after it has sent all data it intends to.
The stream won't end until the connection is closed. This is the nature of all streams of an unknown length.
You either need to know the length ahead of time, or you need to keep reading until the connection is closed. A common way is to transmit the byte length of the stream as a 64-bit int. So the first 8 bytes of the stream are read into an Int64, and the rest is the data.
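A minimal sketch of that approach over an SslStream, reusing this.stream from the question; the 8-byte prefix is an assumption about what the sending side writes:
public byte[] ReadMessage()
{
    // Read the 8-byte length prefix, looping because Read may return fewer bytes than requested.
    byte[] prefix = new byte[8];
    int got = 0;
    while (got < prefix.Length)
    {
        int n = this.stream.Read(prefix, got, prefix.Length - got);
        if (n == 0) throw new EndOfStreamException("Connection closed before the length prefix arrived");
        got += n;
    }
    long length = BitConverter.ToInt64(prefix, 0);

    // Read exactly 'length' bytes of payload.
    byte[] payload = new byte[length];
    int offset = 0;
    while (offset < payload.Length)
    {
        int n = this.stream.Read(payload, offset, payload.Length - offset);
        if (n == 0) throw new EndOfStreamException("Connection closed mid-message");
        offset += n;
    }
    return payload;
}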
Typically one reads a stream one buffer at a time. In C#, that loop looks like this (Read returning 0 signals the end of the stream):
int i;
while ((i = stream.Read(buffer, 0, buffer.Length)) > 0)
{
destStream.Write(buffer, 0, i);
}
A bit late, but I am using the MSDN code with the following modification:
if (sb.ToString().IndexOf("a OK") != -1 || sb.ToString().IndexOf("a BAD") != -1 || sb.ToString().IndexOf("a NO") != -1)
{
break;
}
Please use this working patch if it works for you:
int bytesRead = 0;
string chunkString = "";
do
{
byte[] responseBuffer = new byte[CHUNKSIZE];
bytesRead = sslStream.Read(responseBuffer, 0, responseBuffer.Length);
ApplicationHelper.ApplicationLogger.WriteInfoLog("Bytes Received " + bytesRead);
if (bytesRead > 0)
{
chunkString = Encoding.UTF8.GetString(responseBuffer, 0, bytesRead);
responseXML += chunkString;
}
}
while (bytesRead > 0 && !chunkString.EndsWith(EOF)); // stop if the connection closes before the EOF marker arrives

Data loss TCP IP C# [duplicate]

This question already has answers here:
Receiving data in TCP
(10 answers)
Closed 2 years ago.
Here's my code:
private void OnReceive(IAsyncResult result)
{
NetStateObject state = (NetStateObject)result.AsyncState;
Socket client = state.Socket;
int size = client.EndReceive(result);
byte[] data = state.Buffer;
object message = null;
using (MemoryStream stream = new MemoryStream(data))
{
BinaryFormatter formatter = new BinaryFormatter();
message = formatter.Deserialize(stream);
}
//todo: something with message
client.BeginReceive(
state.Buffer,
0,
NetStateObject.BUFFER_SIZE,
SocketFlags.None,
OnReceive,
state
);
}
state.Buffer has a maximum size of NetStateObject.BUFFER_SIZE (1024). First, is this too big or too small? Second, if I send something larger than that, my deserialization messes up because the object it is trying to deserialize doesn't have all the information (because not all the data was sent). How do I make sure that all my data has been received before I try to construct it and do something with it?
Completed Working Code
private void OnReceive(IAsyncResult result)
{
NetStateObject state = (NetStateObject)result.AsyncState;
Socket client = state.Socket;
try
{
//get the read data and see how many bytes we received
int bytesRead = client.EndReceive(result);
//store the data from the buffer
byte[] dataReceived = state.Buffer;
//this will hold the byte data for the number of bytes being received
byte[] totalBytesData = new byte[4];
//load the number byte data from the data received
for (int i = 0; i < 4; i++)
{
totalBytesData[i] = dataReceived[i];
}
//convert the length byte data to a human-readable integer
int totalBytes = BitConverter.ToInt32(totalBytesData, 0);
//create a new array with the length of the total bytes being received
byte[] data = new byte[totalBytes];
//load what is in the buffer into the data[]
for (int i = 0; i < bytesRead - 4; i++)
{
data[i] = state.Buffer[i + 4];
}
//receive packets from the connection until the number of bytes read is no longer less than we need
while (bytesRead < totalBytes + 4)
{
bytesRead += state.Socket.Receive(data, bytesRead - 4, totalBytes + 4 - bytesRead, SocketFlags.None);
}
CommandData commandData;
using (MemoryStream stream = new MemoryStream(data))
{
BinaryFormatter formatter = new BinaryFormatter();
commandData = (CommandData)formatter.Deserialize(stream);
}
ReceivedCommands.Enqueue(commandData);
client.BeginReceive(
state.Buffer,
0,
NetStateObject.BUFFER_SIZE,
SocketFlags.None,
OnReceive,
state
);
dataReceived = null;
totalBytesData = null;
data = null;
}
catch(Exception e)
{
Console.WriteLine("***********************");
Console.WriteLine(e.Source);
Console.WriteLine("***********************");
Console.WriteLine(e.Message);
Console.WriteLine("***********************");
Console.WriteLine(e.InnerException);
Console.WriteLine("***********************");
Console.WriteLine(e.StackTrace);
}
}
TCP is a stream protocol. It has no concept of packets. A single write call can be sent in multiple packets, and multiple write calls can be put into the same packet. So you need to implement your own packetizing logic on top of TCP.
There are two common ways to packetize:
- Delimiter characters; this is usually used in text protocols, with the newline being a common choice.
- Prefixing the length to each packet, usually a good choice with binary protocols.
You store the size of a logical packet at the beginning of that packet. Then you read until you have received enough bytes to fill the packet, and only then start deserializing.
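A minimal sketch of the length-prefix approach on the sending side, assuming the same BinaryFormatter payload as in the question (the socket parameter and SendCommand name are illustrative):
// Serialize the object, then send a 4-byte length prefix followed by the payload.
void SendCommand(Socket socket, CommandData commandData)
{
    byte[] payload;
    using (var ms = new MemoryStream())
    {
        new BinaryFormatter().Serialize(ms, commandData);
        payload = ms.ToArray();
    }
    byte[] lengthPrefix = BitConverter.GetBytes(payload.Length); // Int32, 4 bytes
    socket.Send(lengthPrefix);
    socket.Send(payload);
}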
How do I make sure that all my data has been received before I try to construct it and do something with it?
You have to implement some protocol so you know.
While TCP is reliable, it does not guarantee that the data from a single write at one end of the socket will appear as a single read at the other end: retries, packet fragmentation, and the MTU can all lead to data being received in different-sized units by the receiver. You will, however, get the data in the right order.
So you need to include some information when sending that allows the receiver to know when it has the complete message. I would also recommend including what kind of message and what version of the data (this will form the basis of being able to support different client and server versions together).
So the sender sends:
- Message type
- Message version
- Message size (in bytes)
And the receiver will loop, performing a read with a buffer and appending this to a master buffer (MemoryStream is good for this). Once the complete header is received it knows when the complete data has been received.
(Another route is to include some pattern as an "end of message" marker, but then you need to handle the same sequence of bytes occurring in the content—hard to do if the data is binary rather than text.)
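A sketch of that receive loop, assuming a fixed 9-byte header of message type (1 byte), version (4 bytes), and payload size (4 bytes); the exact layout and the HandleMessage call are illustrative, not a fixed standard:
void ReceiveOneMessage(Stream stream)
{
    var master = new MemoryStream();
    byte[] buffer = new byte[4096];
    const int HeaderSize = 9; // 1-byte type + 4-byte version + 4-byte payload length
    int payloadLength = -1;

    while (true)
    {
        int read = stream.Read(buffer, 0, buffer.Length);
        if (read == 0) break; // connection closed before a full message arrived

        master.Write(buffer, 0, read);

        // Once the header is complete, we know how long the whole message will be.
        if (payloadLength < 0 && master.Length >= HeaderSize)
        {
            payloadLength = BitConverter.ToInt32(master.GetBuffer(), 5);
        }

        // Stop as soon as header + payload have fully arrived.
        if (payloadLength >= 0 && master.Length >= HeaderSize + payloadLength)
        {
            byte[] message = master.ToArray();
            byte messageType = message[0];
            int version = BitConverter.ToInt32(message, 1);
            // HandleMessage is a hypothetical handler for the decoded frame.
            HandleMessage(messageType, version, message, HeaderSize, payloadLength);
            break;
        }
    }
}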
