How to send the length of a packet via TCP/IP protocol - C#

I'm doing this for one of my school projects. I'm trying to design a multi-threaded server that accepts clients for working with a database (adding and deleting records, etc.). When the client connects to the server, I want it to receive all the students in my database.
I access the database on the server side and store the information in an ArrayList, which I'm trying to send over the network. I don't have any knowledge of XML serialization, so I'm trying to send each string in the ArrayList to the client. When I send the data from the server, I sometimes receive all the data at the same time and sometimes I don't, so my first guess was that I have to split the data I send into packets of some length. But I don't see how adding the length at the beginning of a packet helps. Wouldn't it be the same thing? Maybe I get the correct length, maybe I don't.
Here is my code. I haven't tried sending the length of each packet yet, because I have no idea how. I tried sending the length of the ArrayList from the server and reading from the network stream that many times, but it doesn't work (I receive all the data in one packet).
Server side:
private void HandleClient(object client)
{
    try
    {
        ClientNo++;
        TcpClient tcpClient = (TcpClient)client;
        NetworkStream clientStream = tcpClient.GetStream();
        byte[] bytes = new byte[4096];
        int i;
        // Robot r = new Robot();
        Protocol p = new Protocol();
        ArrayList ListaStudentiResponse = p.ExecuteQueryOnStudents("select * from studenti");
        byte[] Length = new byte[4];
        Length = System.Text.Encoding.ASCII.GetBytes(ListaStudentiResponse.Count.ToString());
        clientStream.Write(Length, 0, Length.Length);
        foreach (String s in ListaStudentiResponse)
        {
            byte[] data = System.Text.Encoding.ASCII.GetBytes(s);
            clientStream.Write(data, 0, data.Length);
        }
        tcpClient.Close();
        ClientNo--;
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
}
On Client:
private void connectToServerToolStripMenuItem_Click(object sender, EventArgs e)
{
    tcpclient = new TcpClient();
    NetworkStream netStream;
    try
    {
        tcpclient.Connect("localhost", 8181);
        netStream = tcpclient.GetStream();
        Byte[] bytes = new Byte[10000];
        int readBytes = netStream.Read(bytes, 0, bytes.Length);
        int Length = Int32.Parse(Encoding.ASCII.GetString(bytes, 0, readBytes));
        MessageBox.Show(Length.ToString());
        int i = 0;
        while (i < Length)
        {
            i++;
            Byte[] b = new Byte[10000];
            readBytes = netStream.Read(b, 0, b.Length);
            String response = Encoding.ASCII.GetString(b, 0, readBytes);
            MessageBox.Show(response);
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}

You can use a StateObject to keep track of how large your data is, and then test during ReadCallback to see if you have "all" of the message. If you don't have all of your message, call BeginReceive again with the current StateObject.
Here is a decent example: http://www.csharphelp.com/2007/02/asynchronous-server-socket-using-c/
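To make the pattern concrete, here is a minimal sketch of a StateObject-based receive loop. It assumes a 4-byte length prefix; the StateObject fields and helper names here are illustrative, not taken from the linked article.
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

class StateObject
{
    public Socket Socket;
    public byte[] Buffer = new byte[4096];
    public MemoryStream Received = new MemoryStream();
    public int ExpectedLength = -1; // unknown until the 4-byte prefix arrives
}

class Receiver
{
    public void BeginRead(StateObject state)
    {
        state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length,
                                  SocketFlags.None, ReadCallback, state);
    }

    private void ReadCallback(IAsyncResult ar)
    {
        var state = (StateObject)ar.AsyncState;
        int read = state.Socket.EndReceive(ar);
        if (read == 0) return; // remote side closed the connection

        state.Received.Write(state.Buffer, 0, read);

        // Once at least 4 bytes are buffered, decode the message length.
        if (state.ExpectedLength < 0 && state.Received.Length >= 4)
            state.ExpectedLength = BitConverter.ToInt32(state.Received.GetBuffer(), 0);

        // Not all of the message yet: issue another BeginReceive with
        // the same StateObject and wait for more data.
        if (state.ExpectedLength < 0 || state.Received.Length < 4 + state.ExpectedLength)
        {
            BeginRead(state);
            return;
        }

        string message = Encoding.ASCII.GetString(
            state.Received.GetBuffer(), 4, state.ExpectedLength);
        // ...handle the message, then reset the state for the next one...
    }
}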

This is what I've been using:
How to use the buffer on SocketAsyncEventArgs object
Look at the accepted answer. First off, it uses I/O completion ports, which are more efficient than the plain async pattern. Secondly, I find it very easy to troubleshoot by looking at e.SocketError to find the exact cause of a failure.
How it works is that each message you send is wrapped with a header and a trailer.
When a message comes in, the receiver checks whether the trailer has been received. If the trailer has not been received, it continues receiving for that client and appends each received chunk to a StringBuilder object. Once the trailer is received, just call StringBuilder.ToString() to get the whole content.
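The receive side of that scheme might look roughly like this. This is a sketch only; the "<ETX>" trailer string and the StringBuilder-as-UserToken arrangement are my assumptions, not details from the linked answer.
using System.Net.Sockets;
using System.Text;

// Completed handler for a SocketAsyncEventArgs receive operation.
void OnReceiveCompleted(object sender, SocketAsyncEventArgs e)
{
    if (e.SocketError != SocketError.Success || e.BytesTransferred == 0)
        return; // inspect e.SocketError here to see exactly why it failed

    // One StringBuilder per client, carried in UserToken.
    var buffer = (StringBuilder)e.UserToken;
    buffer.Append(Encoding.UTF8.GetString(e.Buffer, e.Offset, e.BytesTransferred));

    const string trailer = "<ETX>";
    int at = buffer.ToString().IndexOf(trailer);
    if (at < 0)
    {
        // Trailer not received yet: continue receiving for this client.
        var socket = (Socket)sender;
        if (!socket.ReceiveAsync(e))      // false means it completed synchronously
            OnReceiveCompleted(sender, e);
        return;
    }

    // Trailer received: the whole content is everything before it.
    string message = buffer.ToString(0, at);
    buffer.Clear();
    // ...process the complete message...
}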

Related

How can I send multiple string messages from client to server using a single instance of TcpClient?

I have separate client and server console apps. I'm simply trying to send a string from the client to the server and have the server write the string to the console, using TcpClient. I can send a single message just fine, but when I throw a while loop into the client app to try to send multiple messages without closing the TcpClient, the server doesn't write anything to the console.
//Server
using (TcpClient client = listener.AcceptTcpClient())
{
    NetworkStream ns = client.GetStream();
    byte[] buffer = new byte[1024];
    while (true)
    {
        if (ns.DataAvailable)
        {
            int bytesRead = 0;
            string dataReceived = "";
            do
            {
                bytesRead = ns.Read(buffer, 0, buffer.Length);
                dataReceived += Encoding.UTF8.GetString(buffer, 0, bytesRead);
            }
            while (bytesRead > 0);
            Console.WriteLine($"Message:{ dataReceived }\n");
        }
    }
}
//Client
using (TcpClient client = new TcpClient(hostname, port))
{
    if (client.Connected)
    {
        Console.WriteLine("Connected to server");
        NetworkStream ns = client.GetStream();
        string message = "";
        //Removing this while loop I can send a single message that the server will write to console,
        //but with the loop present the server does not write anything
        while (true)
        {
            message = Console.ReadLine();
            byte[] messageBytes = UTF8Encoding.UTF8.GetBytes(message);
            ns.Write(messageBytes);
            Console.WriteLine($"Message Sent! ({ messageBytes.Length } bytes)");
        }
    }
}
I'm interested in learning sockets and have been poring over SO questions and MSDN docs for two days, but I cannot figure out why it's not working as I intend. I feel a bit silly even submitting a question, because I'm sure it's something basic I'm not understanding. Could someone please drop some knowledge on me?
SOLUTION
//Server
using (TcpClient client = listener.AcceptTcpClient())
{
    NetworkStream ns = client.GetStream();
    StreamReader sr = new StreamReader(ns);
    string message = "";
    while (true)
    {
        message = sr.ReadLine();
        Console.WriteLine(message);
    }
}
//Client
using (TcpClient client = new TcpClient(hostname, port))
{
    if (client.Connected)
    {
        Console.WriteLine("Connected to server");
        NetworkStream ns = client.GetStream();
        StreamWriter sw = new StreamWriter(ns) { AutoFlush = true };
        string message = "";
        while (true)
        {
            message = Console.ReadLine();
            sw.WriteLine(message);
        }
    }
}
If you debug your server, you'll see that it does receive data. You're just not displaying it, because the only output your server produces comes after the loop, when the byte count returned is 0: Console.WriteLine($"Message:{ dataReceived }\n");. The byte count will only be 0 when the underlying socket has been shut down, and that never happens because your client is stuck in an infinite loop.
A better approach, for a simple text-based client/server example like this, is to use StreamWriter and StreamReader with line-based messages, i.e. WriteLine() and ReadLine(). Then the line breaks serve as the message delimiter, and your server can write out each message as soon as it receives a new line.
Note also that in your example above, you are assuming each chunk of data contains only complete characters. But you're using UTF-8, where a character can be two or more bytes, and TCP doesn't guarantee how the bytes that are sent are grouped. Using StreamWriter and StreamReader will fix this bug too, but if you wanted to handle it explicitly yourself, you could use the Decoder class, which buffers partial characters.
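For example, a single Decoder kept per connection can be fed each chunk as it arrives; if a chunk ends in the middle of a multi-byte character, the Decoder holds those bytes back until the rest shows up. A minimal sketch (the names here are mine, not from the code above):
using System.Text;

// One Decoder per connection, created once and reused for every chunk.
Decoder decoder = Encoding.UTF8.GetDecoder();
StringBuilder text = new StringBuilder();

void AppendChunk(byte[] buffer, int bytesRead)
{
    // GetCharCount/GetChars account for bytes buffered from earlier
    // chunks, so characters split across reads come out intact.
    char[] chars = new char[decoder.GetCharCount(buffer, 0, bytesRead)];
    int produced = decoder.GetChars(buffer, 0, bytesRead, chars, 0);
    text.Append(chars, 0, produced);
}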
For some examples of how to correctly implement a simple client/server network program like that, see posts like these:
.NET Simple chat server example
C# multithreading chat server, handle disconnect
C# TcpClient: Send serialized objects using separators?

Unable to retrieve message from TCP/IP server C# console app

I have to create a TCP/IP client against an existing server (which has specific documentation). I followed it, but I still get no response from the server.
Initially the problem was the message format and the SMTP commands, which I replaced from another working command.
I have to use SSL, without a client certificate, for login.
TcpClient client = new TcpClient("Server DNS", PORT);
Console.WriteLine("-------------------------- \nConnection is : " + client.Connected);
SslStream stream = new SslStream(client.GetStream(), false, VerifyServerCertificate, null);
stream.AuthenticateAsClient("Server DNS");
Console.Write("Authentication status :" + stream.IsAuthenticated);
// FOR NOW I AM ONLY SENDING LOGIN COMMAND
string line = "BDAT 30 LAST<EOL>{login command}<LF>useridpassword";
stream.Write(Encoding.UTF8.GetBytes(line));
stream.Flush();
string serverMessage = ReadMessage(stream);
Console.WriteLine("\nServer says: {0}", serverMessage);
stream.Close();
This sends the command, but the return message is always empty.
Here is the method I am using for ReadMessage:
static string ReadMessage(SslStream sslStream)
{
    // Read the message sent by the client.
    // The client signals the end of the message using the
    // "<EOF>" marker.
    byte[] buffer = new byte[2048];
    StringBuilder messageData = new StringBuilder();
    int bytes = -1;
    sslStream.ReadTimeout = 60000;
    do
    {
        // Read the client's test message.
        bytes = sslStream.Read(buffer, 0, buffer.Length);
        sslStream.Flush();
        // Use Decoder class to convert from bytes to UTF8
        // in case a character spans two buffers.
        Decoder decoder = Encoding.UTF8.GetDecoder();
        char[] chars = new char[decoder.GetCharCount(buffer, 0, bytes)];
        decoder.GetChars(buffer, 0, bytes, chars, 0);
        messageData.Append(chars);
        // Check for EOF or an empty message.
        if (messageData.ToString().IndexOf("<EOF>") != -1)
        {
            break;
        }
    } while (bytes != 0);
    return messageData.ToString();
}
I get a console error when I run this code (screenshot omitted), and I get the same error when stepping through with breakpoints.
Sorry for the long post, but I think this information was necessary. Any help is appreciated; it's a very frustrating situation, as I haven't been able to make progress for the last 8 days because of it.
Thank you.

TCP listener data type conversion

I'm developing a TCP server application with the TcpListener class. My server application receives data from the client every second.
The client sends data in a predefined format. The received data contains 15 messages separated by "\0", e.g. "12\012345\012.12\0" and so on. After receiving the data, I split it and convert it into a string array, so I have a string array of 15 elements. After that, each element gets converted to a specific data type and the whole record goes into the database.
The send/receive happens every second. The problem I'm facing is that my application is not sending data back to the client application every second.
When I remove the data type conversion code, everything works as expected. But with the conversion code it takes more milliseconds, and my application is not able to send data back to the client in time.
Below is my code. If I remove the data type conversion code from the "MapVariables" function, it works well.
Can anyone please help me with this?
private async void ProcessClient(TcpClient tcpClient, CancellationToken ct)
{
    try
    {
        while (!ct.IsCancellationRequested)
        {
            var stream = tcpClient.GetStream();
            var buffer = new byte[bufferSize];
            var amountRead = await stream.ReadAsync(buffer, 0, bufferSize);
            var message = Encoding.ASCII.GetString(buffer, 0, amountRead);
            string[] dataFromClient = Code.Common.SplitByLength(message, messageSize).ToArray();
            dataFromClient = dataFromClient.Select(x => ParseMessage(x)).ToArray();
            common.MapVariables(dataFromClient);
            string serverResponse = string.Join(", ", dataFromClient);
            //Byte[] sendBytes = Encoding.ASCII.GetBytes(serverResponse);
            Byte[] sendBytes = Encoding.ASCII.GetBytes(message);
            await stream.WriteAsync(sendBytes, 0, sendBytes.Length, ct);
            stream.Flush();
        }
    }
    catch (System.IO.IOException ex)
    {
        // log exception
    }
    catch (Exception ex)
    {
        // log exception
    }
}
public void MapVariables(string[] variables)
{
    Variables.Variable1 = int.Parse(variables[0]);
    Variables.Variable2 = int.Parse(variables[1]);
    Variables.Variable3 = int.Parse(variables[2]);
    Variables.Variable4 = int.Parse(variables[3]);
    Variables.Variable5 = int.Parse(variables[4]);
    Variables.Variable6 = int.Parse(variables[5]);
    Variables.Variable7 = int.Parse(variables[6]);
    Variables.Variable8 = decimal.Parse(variables[7]);
    Variables.Variable9 = decimal.Parse(variables[8]);
    Variables.Variable10 = decimal.Parse(variables[9]);
    Variables.Variable11 = decimal.Parse(variables[10]);
    Variables.Variable12 = int.Parse(variables[11]);
    Variables.Variable13 = int.Parse(variables[12]);
    Variables.Variable14 = decimal.Parse(variables[13]);
    Variables.Variable15 = decimal.Parse(variables[14]);
    InsertintoDatabase();
}
Looking at the code, you just send back exactly what they sent you (the comma-delimited version is commented out).
You could move your MapVariables(...) call onto a separate thread. My guess is that your InsertintoDatabase call is the true bottleneck.
You could also just move MapVariables(...) to run after you send the reply, since it doesn't look like doing it afterwards will affect anything. A sketch of the first approach follows.
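As a sketch of the threading suggestion: push each parsed record into a queue and let one background consumer do the parsing and database insert, so the socket loop can reply immediately. The Channel-based setup below is an assumption about how this might be wired up, not the poster's code.
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

// Created once at startup: a queue between the socket loop and a
// single background consumer that owns the database work.
Channel<string[]> records = Channel.CreateUnbounded<string[]>(
    new UnboundedChannelOptions { SingleReader = true });

async Task ConsumeRecordsAsync(CancellationToken ct)
{
    // Runs on a background task; parsing and InsertintoDatabase happen
    // here instead of inside the per-second send/receive loop.
    await foreach (string[] record in records.Reader.ReadAllAsync(ct))
        common.MapVariables(record);
}

// In ProcessClient, replace the inline call with a non-blocking enqueue:
//     records.Writer.TryWrite(dataFromClient);
// ...then build and send the reply right away.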

Proper way to receive network data in loop

I am writing a C# client application which will connect to a server written in Python. My question is about receiving data in a loop. The application structure is: client asks server -> server responds to client. Everything works fine when the message is smaller than the actual buffer size (set on the server). For example: server-side buffer: 1024, client buffer size: 256, data length < 1 KB. I run my application with the following code:
int datacounter = 0;
byte[] recived = new byte[256];
StringBuilder stb = new StringBuilder();
serverStream.ReadTimeout = 1500;
try
{
    while ((datacounter = serverStream.Read(recived, 0, 256)) > 0)
    {
        Console.WriteLine("RECIVED: " + datacounter.ToString());
        stb.Append(System.Text.Encoding.UTF8.GetString(recived, 0, datacounter));
    }
}
catch { Console.WriteLine("Timeout!"); }
The application then receives the data in 4 loop iterations (256 bytes each):
RECIVED: 256
RECIVED: 256
RECIVED: 256
RECIVED: 96
And then the timeout hits, which ends the transmission and passes the complete data (from the stb object) on for later analysis. I don't think using a timeout is proper, but I don't know any other way to do this.
However, that way it works. Here is an example that does not:
Server-side buffer: 1024, client-side buffer: 256, data length ~ 8 KB (the Python side sends the data in a loop).
RECIVED: 256
RECIVED: 256
RECIVED: 256
RECIVED: 256
Then the timeout hits (and obviously the data is incomplete: I got 1 KB of the 8 KB). Sometimes the loop even ends after one iteration with 28 received bytes, and that's all before the timeout. Python says the data was sent properly. Here's how I create the socket and the serverStream object:
TcpClient clientSocket = new TcpClient();
clientSocket.Connect("x.y.z.x", 1234);
NetworkStream serverStream = clientSocket.GetStream();
It's not the TcpClient's fault. I tried the same thing with plain sockets, created like:
new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)
However, that works similarly. Is there a way to make my loop receive all the data without a timeout? I would like to keep the socket synchronous if possible.
I don't think there's anything functionally wrong with your receive code. I put together a test, and the receiver gets as much as you can send it (e.g. 8 MB), as long as you keep sending without a 1.5-second pause before timing out.
So it looks like your server is simply not sending "fast" enough.
To answer your question, timing is not the typical way of knowing when you have received a full message. One common, simple way of determining when a full message has been received is to prefix the full message with its length on the sending side (e.g. a 4-byte int). Then on the receiving side, first read 4 bytes, decode them into the length, and then read that many more bytes.
You could also consider appending a message-termination string, such as Environment.NewLine, to the end of your message. This has the advantage that you can call StreamReader.ReadLine(), which blocks until the full message is received. This only works if the terminator cannot appear in the message itself.
If you can't alter the server protocol, is there any other way of knowing you have received a full message (e.g. checking for a NewLine at the end of the message, an XML end tag, or some other pattern)? If not, perhaps you could wait for the server to disconnect; otherwise it looks like you are forced to find the right timing balance.
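To make the length-prefix idea concrete, here is a sketch assuming a 4-byte little-endian prefix (the Python side would have to pack the length the same way, e.g. struct.pack('<i', len(data))):
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

// Sender: write a 4-byte length, then the payload.
static void SendMessage(NetworkStream stream, string message)
{
    byte[] payload = Encoding.UTF8.GetBytes(message);
    byte[] prefix = BitConverter.GetBytes(payload.Length); // little-endian on x86/x64
    stream.Write(prefix, 0, prefix.Length);
    stream.Write(payload, 0, payload.Length);
}

// Receiver: read exactly 4 bytes, decode the length, then read exactly
// that many more bytes. The inner loop is needed because Read may
// return fewer bytes than requested.
static string ReceiveMessage(NetworkStream stream)
{
    byte[] prefix = ReadExactly(stream, 4);
    int length = BitConverter.ToInt32(prefix, 0);
    byte[] payload = ReadExactly(stream, length);
    return Encoding.UTF8.GetString(payload);
}

static byte[] ReadExactly(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new IOException("Connection closed mid-message.");
        offset += read;
    }
    return buffer;
}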
I am including the test code below in case you want to play around with it.
Server/Sending Side:
IPAddress localAddr = IPAddress.Parse("127.0.0.1");
TcpListener server = new TcpListener(localAddr, 13579);
server.Start();
TcpClient clientSocket = server.AcceptTcpClient();
NetworkStream stream = clientSocket.GetStream();
int bytesSent = 0;
int bytesToSend = 1 << 25;
int bufferSize = 1024;
string testMessage = new string('X', bufferSize);
byte[] buffer = UTF8Encoding.UTF8.GetBytes(testMessage);
while (bytesSent < bytesToSend)
{
    int byteToSendThisRound = Math.Min(bufferSize, bytesToSend - bytesSent);
    stream.Write(buffer, 0, byteToSendThisRound);
    bytesSent += byteToSendThisRound;
}
Client/Receiving Side:
TcpClient client = new TcpClient("127.0.0.1", 13579);
NetworkStream serverStream = client.GetStream();
int totalBytesReceived = 0;
int datacounter = 0;
byte[] recived = new byte[256];
StringBuilder stb = new StringBuilder();
serverStream.ReadTimeout = 1500;
try
{
    while ((datacounter = serverStream.Read(recived, 0, 256)) > 0)
    {
        totalBytesReceived += 256;
        Console.WriteLine("RECIVED: {0}, {1}", datacounter, totalBytesReceived);
        stb.Append(System.Text.Encoding.UTF8.GetString(recived, 0, datacounter));
    }
}
catch { Console.WriteLine("Timeout!"); }
Why don't you dump the exception that makes your code go into the catch branch and find out? :)
catch (Exception ex) { Console.WriteLine("Timeout because of... " + ex.Message); }
--EDIT
Sorry, I didn't see the timeout. The question you're asking is whether there's a way to do it without a timeout. Yes: don't set any timeout, and check whether the received number of bytes is smaller than the buffer size.
That is:
while ((datacounter = serverStream.Read(recived, 0, 256)) > 0)
{
    Console.WriteLine("RECIVED: " + datacounter.ToString());
    stb.Append(System.Text.Encoding.UTF8.GetString(recived, 0, datacounter));
    if (datacounter < 256) // you're good to go
        break;
}
For anyone else who needs help with this:
Just to add to Chi_Town_Don's answer: make sure you call stb.ToString() outside of the loop. And I've found that nothing will print out unless the loop breaks, and for that, if (!serverStream.DataAvailable) { break; } works wonders. That way you don't need to pass in the packet size or some other convoluted condition.

TCP Framing with Binary Protocol

Hey, I'm having an issue separating packets using a custom binary protocol.
Currently the server-side code looks like this:
public void HandleConnection(object state)
{
    TcpClient client = threadListener.AcceptTcpClient();
    NetworkStream stream = client.GetStream();
    byte[] data = new byte[4096];
    while (true)
    {
        int recvCount = stream.Read(data, 0, data.Length);
        if (recvCount == 0) break;
        LogManager.Debug(Utility.ToHexDump(data, 0, recvCount));
        //processPacket(new MemoryStream(data, 0, recvCount));
    }
    LogManager.Debug("Client disconnected");
    client.Close();
    Dispose();
}
I've been watching the hex dumps of the packets; sometimes the entire packet comes in one shot, say all 20 bytes, and other times it comes in fragmented. How do I need to buffer this data to be able to pass it to my processPacket() method correctly? I'm currently using only a single-byte opcode header. Should I add something like a (ushort)contentLength to the header as well? I'm trying to make the protocol as lightweight as possible, and this system won't be sending very large packets (< 128 bytes).
The client-side code I'm testing with is as follows:
public void auth(string user, string password)
{
    using (TcpClient client = new TcpClient())
    {
        client.Connect(IPAddress.Parse("127.0.0.1"), 9032);
        NetworkStream networkStream = client.GetStream();
        using (BinaryWriter writer = new BinaryWriter(networkStream))
        {
            writer.Write((byte)0); //opcode
            writer.Write(user.ToUpper());
            writer.Write(password.ToUpper());
            writer.Write(SanitizationMgr.Verify()); //App hash
            writer.Write(Program.Seed);
        }
    }
}
I'm not sure if that could be what's messing it up, and there doesn't seem to be much information about binary protocols on the web, especially where C# is involved. Any comments would be helpful. =)
I solved it with the code below. I'm not sure if it's correct, but it seems to give my handlers just what they need.
public void HandleConnection(object state)
{
    TcpClient client = threadListener.AcceptTcpClient();
    NetworkStream stream = client.GetStream();
    byte[] data = new byte[1024];
    uint contentLength = 0;
    var packet = new MemoryStream();
    while (true)
    {
        int recvCount = stream.Read(data, 0, data.Length);
        if (recvCount == 0) break;
        if (contentLength == 0 && recvCount < headerSize)
        {
            LogManager.Error("Got incomplete header!");
            Dispose();
        }
        if (contentLength == 0) // Get the payload length
            contentLength = BitConverter.ToUInt16(data, 1);
        // Buffer the data we got into our MemoryStream; the offset
        // argument indexes into the source array, so it must be 0.
        packet.Write(data, 0, recvCount);
        if (packet.Length < contentLength + headerSize) // not enough yet, keep reading
            continue;
        // We have a full packet, pass it on
        //LogManager.Debug(Utility.ToHexDump(packet));
        processPacket(packet);
        // Reset for the next packet
        contentLength = 0;
        packet = new MemoryStream();
    }
    LogManager.Debug("Client disconnected");
    client.Close();
    Dispose();
}
You should just treat it as a stream. Don't rely on any particular chunking behaviour.
Is the amount of data you need always the same? If not, you should change the protocol (if you can) to prefix the logical "chunk" of data with the length in bytes.
In this case you're using BinaryWriter on one side, so attaching a BinaryReader to the NetworkStream returned by TcpClient.GetStream() would seem like the easiest approach. If you really want to capture all the data for a chunk at a time, though, you should go back to my idea of prefixing the data with its length; then just loop round until you've got all the data.
(Make sure you've got enough data to read the length though! If your length prefix is 4 bytes, you don't want to read 2 bytes and miss the next 2...)
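Since BinaryWriter.Write(string) already writes each string with a 7-bit-encoded length prefix, the server can mirror the client's writes with a BinaryReader. A sketch, assuming the field order from the auth() method above; the types of the app-hash and seed fields depend on what SanitizationMgr.Verify() and Program.Seed actually are, so those reads are guesses:
using System.IO;

// stream is the NetworkStream from TcpClient.GetStream().
using (BinaryReader reader = new BinaryReader(stream))
{
    byte opcode = reader.ReadByte();       // pairs with writer.Write((byte)0)
    string user = reader.ReadString();     // pairs with writer.Write(user.ToUpper())
    string password = reader.ReadString(); // pairs with writer.Write(password.ToUpper())
    // Read the remaining fields with the ReadXxx calls matching their
    // written types, for example:
    //     string appHash = reader.ReadString();
    //     int seed = reader.ReadInt32();
}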
