I am new to C#; this is my first program :D I was coding in Java before, and now I am having a lot of trouble sending data over sockets.
When I try to send data over a socket, it is only sent at the very end of the program.
For example:
using System;
using System.Net;
using System.Net.Sockets;
using System.Runtime.InteropServices;
using System.IO;
using System.Text;

namespace HelloWorld
{
    class Program
    {
        public static TcpClient client;
        public static NetworkStream networkStream;

        public static void Main(string[] args)
        {
            client = new TcpClient("62.210.130.212", 35025);
            networkStream = client.GetStream();
            byte[] usernameBytes = Encoding.ASCII.GetBytes("This is just a test bro!");
            networkStream.Write(usernameBytes, 0, usernameBytes.Length);
            networkStream.Flush();
            while (true)
            {
                // Infinite loop
            }
            // THE DATA WILL ONLY BE SENT HERE, AT THE END OF THE PROGRAM. WHY?!
        }
    }
}
Here the data is only sent when the program shuts down (because of the while loop),
and here:
using System;
using System.Net;
using System.Net.Sockets;
using System.Runtime.InteropServices;
using System.IO;
using System.Text;

namespace HelloWorld
{
    class Program
    {
        public static TcpClient client;
        public static NetworkStream networkStream;

        public static void Main(string[] args)
        {
            client = new TcpClient("62.210.130.212", 35025);
            networkStream = client.GetStream();
            byte[] usernameBytes = Encoding.ASCII.GetBytes("This is just a test bro!");
            networkStream.Write(usernameBytes, 0, usernameBytes.Length);
            networkStream.Flush();
        }
    }
}
In this case the data is sent right away, because the program ends immediately after the write...
Does anyone know why this happens and how to fix it? I want my sockets to send data at any point while the program is running, not just at the end :D
Thank you very much, and thank you for reading this question all the way to the end.
Julien.
Does anyone know why this happens and how to fix it? I want my sockets to send data at any point while the program is running, not just at the end.
Why is this happening?
Due to Nagle's algorithm, the transmission is delayed briefly (on the order of milliseconds).
This means your code does not leave enough time for the send to complete before it enters the loop:
networkStream.Write(usernameBytes, 0, usernameBytes.Length);
networkStream.Flush();
// the time between here is not enough for Nagle's algorithm to complete
while (true) { } // blocks the thread from doing any more work
// so, it will end up sending here instead.
How to fix it?
A Quick Solution - Disable Nagle's Algorithm
As documented in this answer you could set TcpClient.NoDelay to true to disable Nagle's algorithm.
You can accomplish that easily by adding it to your code:
client = new TcpClient("62.210.130.212", 35025);
client.NoDelay = true; // add this line of code to disable Nagle's algorithm
Note: This solution will force an immediate send. However, as your question wasn't clear on the exact requirement, please note that this is probably not the "best" solution.
A Better Solution - Force execution by disposing your resource(s)
If you dispose/close your TcpClient, it will accomplish the same thing. In addition, it has the added benefit (and is much better practice) of disposing the resource immediately when you're done using it.
public static void Main(string[] args)
{
    using (TcpClient client = new TcpClient("62.210.130.212", 35025))
    using (NetworkStream networkStream = client.GetStream())
    {
        byte[] usernameBytes = Encoding.ASCII.GetBytes("This is just a test bro!");
        networkStream.Write(usernameBytes, 0, usernameBytes.Length);
    }
    while (true) { } // loop infinitely
}
Note: This doesn't actually send the packet immediately like the solution above; instead, it forces the packet to send because we are closing the connection. In your case the effect is the same, and this is a much better approach to practice.
What is Nagle's Algorithm?
The Nagle algorithm is designed to reduce network traffic by causing the socket to buffer small packets and then combine and send them in one packet under certain circumstances.
A TCP packet consists of 40 bytes of header plus the data being sent. When small packets of data are sent with TCP, the overhead resulting from the TCP header can become a significant part of the network traffic. On heavily loaded networks, the congestion resulting from this overhead can result in lost datagrams and retransmissions, as well as excessive propagation time caused by congestion. The Nagle algorithm inhibits the sending of new TCP segments when new outgoing data arrives from the user if any previously transmitted data on the connection remains unacknowledged.
The majority of network applications should use the Nagle algorithm.
Setting this property on a User Datagram Protocol (UDP) socket will have no effect.
Related
I'm teaching myself some simple networking using Unity and Sockets and I'm running into problems synchronizing data between a client and server. I'm aware that there are other options using Unity Networking but, before I move on, I want to understand better how to improve my code using the System libraries.
In this example I'm simply trying to stream my mouse position over a multicast UDP socket. I'm encoding a string into a byte array and sending that array once per frame. I'm aware that sending these values as a string is suboptimal but, unless that is likely the bottleneck, I'm assuming it's OK to come back and optimize that later.
In my setup the server is sending values at 60 fps, and the client is reading at the same rate. The problem I'm having is that when the client receives values, it typically receives many at once. If I log the values I received with a ------ between each frame, I typically get output like this:
------
------
------
------
------
------
------
119,396
91,396
45,391
18,379
-8,362
-35,342
-59,314
------
------
------
------
------
------
------
I would expect unsynchronized update cycles to lead to receiving two values per frame, but I'm not sure what might be accounting for the larger discrepancy.
Here's the Server code:
using UnityEngine;
using System.Net;
using System.Net.Sockets;
using System.Text;

public class Server : MonoBehaviour
{
    Socket _socket;

    void OnEnable ()
    {
        var ip = IPAddress.Parse ("224.5.6.7");
        var ipEndPoint = new IPEndPoint(ip, 4567);

        _socket = new Socket (AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        _socket.SetSocketOption (SocketOptionLevel.IP, SocketOptionName.AddMembership, new MulticastOption (ip));
        _socket.SetSocketOption (SocketOptionLevel.IP, SocketOptionName.MulticastTimeToLive, 2);
        _socket.Connect(ipEndPoint);
    }

    void OnDisable ()
    {
        if (_socket != null)
        {
            _socket.Close();
            _socket = null;
        }
    }

    public void Send (string message)
    {
        var byteArray = Encoding.ASCII.GetBytes (message);
        _socket.Send (byteArray);
    }
}
And the client:
using UnityEngine;
using System.Net;
using System.Net.Sockets;
using System.Text;

public class Client : MonoBehaviour
{
    Socket _socket;
    byte[] _byteBuffer = new byte[16];

    public delegate void MessageRecievedEvent (string message);
    public MessageRecievedEvent messageWasRecieved = delegate {};

    void OnEnable ()
    {
        var ipEndPoint = new IPEndPoint(IPAddress.Any, 4567);
        var ip = IPAddress.Parse("224.5.6.7");

        _socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        _socket.Bind (ipEndPoint);
        _socket.SetSocketOption (SocketOptionLevel.IP, SocketOptionName.AddMembership, new MulticastOption(ip, IPAddress.Any));
    }

    void Update ()
    {
        while (_socket.Available > 0)
        {
            for (int i = 0; i < _byteBuffer.Length; i++) _byteBuffer[i] = 0;
            _socket.Receive (_byteBuffer);
            messageWasRecieved (Encoding.ASCII.GetString (_byteBuffer));
        }
    }
}
If anybody could shed light on what I can do to improve synchronization that would be a great help.
Network I/O is subject to a large number of external influences, and TCP/IP as a protocol has few requirements. Certainly none that would provide a guarantee of the behavior you seem to want.
Unfortunately, without a good Minimal, Complete, and Verifiable code example, it's not possible to verify that your server is in fact sending data at the interval you claim. It's entirely possible you have a bug that's causing this behavior.
But if we assume that the code itself is perfect, there are still no guarantees when using UDP that datagrams won't be batched up at some point along the way, such that a large number appear in the network buffer all at once. I would expect this to happen with higher frequency when the datagrams are sent through multiple network nodes (e.g. a switch and especially over the Internet), but it could just as easily happen when the server and client are both on the same computer.
Ironically, one option that might force the datagrams to be spread out more is to pad each datagram with extra bytes. The exact number of bytes required would depend on the exact network route; to do this "perfectly" might require writing some calibration logic that tries different padding amounts until the code sees datagrams arriving at the intervals it expects.
But that would significantly increase the complexity of your network I/O code, and yet still would not guarantee the behavior you'd like. And it has some obvious negative side-effects, including the extra overhead on the network (something people using metered network connections certainly won't appreciate), as well as increasing the likelihood of a UDP datagram being dropped altogether.
It's not clear from your question whether your project actually requires multicast UDP, or if that's just something in your code because that's what some tutorial or other example you're following was using. If multicast is not actually a requirement, another thing you definitely should try is to use direct UDP without multicasting.
FWIW: I would not implement this the way you have. Instead, I would use asynchronous receive operations, so that my client receives datagrams the instant they are available, rather than only checking periodically each frame of rendering. I would also include a sequence number in the datagram, and discard (ignore) any datagrams that arrive out of sequence (i.e. where the sequence number isn't strictly greater than the most recent sequence number already received). This approach will improve (perhaps only slightly) responsiveness, but also will handle situations where the datagrams arrive out of order, or are duplicated (two of the three main delivery issues one will experience with UDP…the third being, of course, failure of delivery).
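As a hedged sketch of the sequence-number idea above (the class and field names here are my own invention, not from the question's code, and I'm assuming the sender prefixes each payload with a 4-byte sequence number), the discard logic can be as small as:

```csharp
using System;

// Sketch: accept a datagram only if its sequence number is strictly
// greater than the highest one seen so far; otherwise it is a duplicate
// or an out-of-order arrival and we ignore it.
public class SequenceFilter
{
    private uint _lastSeen; // highest sequence number accepted so far

    public bool TryAccept(byte[] datagram, out ArraySegment<byte> payload)
    {
        payload = default(ArraySegment<byte>);
        if (datagram == null || datagram.Length < 4)
            return false; // too short to carry the 4-byte header

        // Note: BitConverter uses the platform's native byte order.
        uint seq = BitConverter.ToUInt32(datagram, 0);
        if (seq <= _lastSeen)
            return false; // stale: out of order or duplicated

        _lastSeen = seq;
        payload = new ArraySegment<byte>(datagram, 4, datagram.Length - 4);
        return true;
    }
}
```

The client would run each received datagram through `TryAccept` and only apply the mouse position when it returns true.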
For some time now I've been trying to learn socket programming in C#. I've been looking over many MSDN, Stack Overflow and CodeProject projects, discussions and examples.
I rewrote them, I used debug and "step-in" in Visual Studio, I took it line by line, and now I'm on "my little project". It is a mini-chat (console application). Let me describe the code, and at the bottom I will give you my problems.
Server:
Main:
I've started a server, a TcpListener on ip 0.0.0.0 and port 8000.
I've created a thread on a method that accepts my clients (I used 3 threads; this is the first of them) and started it.
Method Accept Clients:
I've used a TcpClient to accept TCP clients from the TcpListener in a while(true).
I've got the stream out of that client and I've created a StreamReader and a StreamWriter over that stream.
I've started a thread, the second one, in which I've done the logic for writing.
And I've started the 3rd thread, in which I've done the logic for reading.
Here's the code:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

namespace TcpServer
{
    class TcpServer
    {
        static TcpListener tcpL;
        static TcpClient tcpC;
        static NetworkStream nStream;
        static StreamWriter sW;
        static StreamReader sR;
        static List<TcpClient> lTcp;

        static void Main(string[] args)
        {
            Console.WriteLine(" >> Server Started");
            tcpL = new TcpListener(IPAddress.Parse("0.0.0.0"), 8000);
            tcpL.Start();
            Thread accClients = new Thread(acceptClients);
            int counter = 0;
            accClients.Start(counter);
            Console.ReadLine();
        }

        private static void acceptClients(object obj)
        {
            while (true)
            {
                tcpC = tcpL.AcceptTcpClient();
                Console.WriteLine(" >> Client Connected");
                nStream = tcpC.GetStream();
                sW = new StreamWriter(nStream);
                sR = new StreamReader(nStream);
                Console.WriteLine(" >> Data Transfer Established");
                Thread thWrite = new Thread(doWriteing);
                thWrite.Start(sW);
                Thread thRead = new Thread(doReading);
                thRead.Start(sR);
            }
        }

        private static void doWriteing(object obj)
        {
            StreamWriter sW = (StreamWriter)obj;
            while (true)
            {
                sW.WriteLine(Console.ReadLine());
                sW.Flush();
            }
        }

        private static void doReading(object obj)
        {
            StreamReader sR = (StreamReader)obj;
            while (true)
            {
                string linie;
                try
                {
                    linie = sR.ReadLine();
                    Console.WriteLine(linie);
                }
                catch (Exception)
                {
                    continue;
                }
            }
        }
    }
}
And the Client:
This one is quite simple.
Main:
I've connected the TcpClient on ip 127.0.0.1 and port 8000.
I've got the stream out of the TcpClient.
Created a StreamWriter and a StreamReader over that stream.
Started 2 threads for Writing and Reading.
Here's the code:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net.Sockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

namespace TCPClient
{
    class TCPClient
    {
        static TcpClient tcpC;
        static NetworkStream nStream;

        static void Main(string[] args)
        {
            Console.WriteLine(" >> Client Started");
            tcpC = new TcpClient("127.0.0.1", 8000);
            Console.WriteLine(" >> Client Connected");
            nStream = tcpC.GetStream();
            StreamWriter sW = new StreamWriter(nStream);
            StreamReader sR = new StreamReader(nStream);
            Thread thWrite = new Thread(doWriteing);
            thWrite.Start(sW);
            Thread thRead = new Thread(doReading);
            thRead.Start(sR);
            Console.ReadLine();
        }

        private static void doWriteing(object obj)
        {
            StreamWriter sW = (StreamWriter)obj;
            while (true)
            {
                sW.WriteLine(Console.ReadLine());
                sW.Flush();
            }
        }

        private static void doReading(object obj)
        {
            StreamReader sR = (StreamReader)obj;
            while (true)
            {
                string linie;
                try
                {
                    linie = sR.ReadLine();
                    Console.WriteLine(linie);
                }
                catch (Exception)
                {
                    continue;
                }
            }
        }
    }
}
Now I have 3 big questions:
Why, or maybe "how", does the server send data to every thread (to only one thread at a time) in the order they were created? Please explain this to me, I really want to understand the process. Here is a "sample" (it seems I can't post pictures):
Server (sending)
> >> Server Started
> >> Client Connected
> >> Data Transfer Established
> >> Client Connected
> >> Data Transfer Established
> >> Client Connected
> >> Data Transfer Established
.
1
2
3
4
5
6
Client 1 (receiving)
>> Client Started
>> Client Connected
1
4
Client 2 (receiving)
>> Client Started
>> Client Connected
2
5
Client 3 (receiving)
>> Client Started
>> Client Connected
3
6
How can I send the data received by the server to all clients?
How can I store the clients? What exactly should I store (and in what) so the server knows that this client id wants to send data to client id X? (For example: client 1 wants to say "I was on the beach" to client 3 and "I was home" to client 2.)
/* I know there might be many throws because I didn't program very defensively, but right now I only want to learn, and any exception that appears can help me. And I know that the first stream Flush doesn't arrive, but I've probably done something wrong there; I'll be investigating that problem. */
/* And important to mention: the clients are chatting only with the server; they don't see what the other clients write. And the server receives the data in the correct order. */
P.S. Please, if you take an interest in this problem, work with this code. I came here because I want to learn these things, not because I'm searching for another "trick" or only to get it solved.
I will try to answer the questions as best as I understand them:
Firstly, you should probably add more debugging messages to your code, as this will greatly help you understand the process. Here is a pastebin where I have modified your code to include more messages so that you can better understand what read/write are doing.
Why, or maybe "How", does the server send data to every thread( to
only one thread at a time ) in the order they are created? Please
explain this to me, I really want to understand the process. Here is a
"sample"( it seems i can't post pictures ):
The server is not sending data to every thread. It is sending data to the Streams that are retrieved by the tcpC.GetStream(). In the acceptClients function, you get the network stream for that specific client (the one that connected). You then pass these to the new threads you create. Each read/write thread pair then has access to their own NetworkStream for working with the clients.
The reason you are probably confused is that you overwrite tcpC and nStream with the values of other clients. The variables tcpC and nStream should really be lists, as for each connected client there is a TcpClient and a NetworkStream.
In other words. On the server, for every client you have running there is 1 TcpClient, 1 NetworkStream and 2 Threads.
How can I send the data received by the server to all clients?
What you want to do is broadcast the information received to all clients (like a chat program). To do this you will need a List<string> to act as a buffer. This means that on your server, in the doReading function, you will read into a list (a buffer) that stores what you read from a client. You will then use the doWriteing function to write that list out to clients. You will also need a way of making sure that the server sent the messages to all clients (not just to one of them).
client 1 wants to say "I was on the beach" to client 3
For this you will probably need a way of saying that in a command-line format. Something like
SEND ID TEXT
where ID is the client # you wish to send to, and everything after it is the message. You can then also use something like BROADCAST TEXT for broadcasting to all clients.
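As a minimal sketch of parsing that command format (the class name and the tuple convention are my own invention, not from the pastebin), the server could split each incoming line like this:

```csharp
using System;

// Sketch: parse "SEND ID TEXT" and "BROADCAST TEXT" style commands.
// Returns (target, message); target == -1 means broadcast to everyone,
// and null means the line was not a valid command.
public static class ChatCommandParser
{
    public static Tuple<int, string> Parse(string line)
    {
        if (string.IsNullOrWhiteSpace(line))
            return null;

        // Split into at most 3 parts so the message may contain spaces.
        string[] parts = line.Split(new[] { ' ' }, 3);

        if (parts[0] == "BROADCAST" && parts.Length >= 2)
            return Tuple.Create(-1, line.Substring("BROADCAST ".Length));

        int id;
        if (parts[0] == "SEND" && parts.Length == 3 && int.TryParse(parts[1], out id))
            return Tuple.Create(id, parts[2]);

        return null; // unrecognized command
    }
}
```

For example, `Parse("SEND 3 I was on the beach")` yields target 3 with the message "I was on the beach".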
Pastebin link again: http://pastebin.com/SBjc6tBF
When a client connects to the server, add the client to a collection. When the server receives data that it needs to pass to all other connected clients, iterate over the collection of clients and send them the data. You will need to check whether each client is still connected. Broadcasting a message to all connected clients is easier than private messaging: you simply relay the message from one client to all clients, as described above. For private messaging, clients will need to send commands to the server that indicate the client's intentions. For example, a simple command might be
SEND "Hello John" TO CLIENT 3
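A hedged sketch of the collection-and-relay idea (the class name is mine, and error handling is deliberately minimal; a real server would also want per-client writers rather than creating one per broadcast):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Net.Sockets;

// Sketch: keep connected clients in a list and relay a line to all of them.
// A client whose stream throws on write is treated as disconnected and dropped.
public class ChatHub
{
    private readonly List<TcpClient> _clients = new List<TcpClient>();
    private readonly object _lock = new object();

    public void Add(TcpClient client)
    {
        lock (_lock) _clients.Add(client);
    }

    public void Broadcast(string line)
    {
        lock (_lock)
        {
            // Iterate backwards so dead clients can be removed mid-loop.
            for (int i = _clients.Count - 1; i >= 0; i--)
            {
                try
                {
                    var writer = new StreamWriter(_clients[i].GetStream());
                    writer.WriteLine(line);
                    writer.Flush();
                }
                catch (IOException)
                {
                    _clients[i].Close();
                    _clients.RemoveAt(i); // client disconnected
                }
            }
        }
    }
}
```

The accept loop would call `Add` for each accepted TcpClient, and the reading threads would call `Broadcast` with each line they receive.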
First, I don't know if Stack Overflow is the best site to post this kind of message, but I don't know of other sites like this.
In order to properly understand TCP programming in C#, I decided to try all the possible approaches from scratch. Here is what I want to learn (not in any particular order):
- Simple One Thread Socket Server (this article)
- Simple Multiple Threads Socket Server (I don't know how yet, because threads are complicated)
- Simple Thread Socket Server (put the client management in another thread)
- Multiple Threads Socket Server
- Using tcpListener
- Using async / Await
- Using tasks
The ultimate objective is to know how to write the best TCP server, not by just copy/pasting some parts of code, but by properly understanding everything.
So, this is my first part: a single-thread TCP server.
Here is my code, but I don't think anybody will correct anything, because it's pretty much a copy from MSDN: http://msdn.microsoft.com/en-us/library/6y0e13d3(v=vs.110).aspx
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Sockets;
using System.Text;

namespace SimpleOneThreadSocket
{
    public class ServerSocket
    {
        private int _iPport = -1;
        private static int BUFFER_SIZE = 1024;
        private Socket _listener = null;

        public ServerSocket(int iPort)
        {
            // Create a TCP/IP socket.
            this._listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            // Save the port
            this._iPport = iPort;
        }

        public void Start()
        {
            byte[] buffer = null;
            String sDatasReceived = null;

            // Bind the socket to loopback address
            try
            {
                this._listener.Bind(new System.Net.IPEndPoint(System.Net.IPAddress.Loopback, _iPport));
                this._listener.Listen(2);
            }
            catch (Exception e)
            {
                System.Console.WriteLine(e.ToString());
            }

            // Listening
            try
            {
                Console.WriteLine("Server listening on 127.0.0.1:" + _iPport);
                while (true)
                {
                    Socket client = this._listener.Accept();
                    Console.WriteLine("Incoming connection from : " + IPAddress.Parse(((IPEndPoint)client.RemoteEndPoint).Address.ToString()) + ":" + ((IPEndPoint)client.RemoteEndPoint).Port.ToString());

                    // An incoming connection needs to be processed.
                    while (true)
                    {
                        buffer = new byte[BUFFER_SIZE];
                        int bytesRec = client.Receive(buffer);
                        sDatasReceived += Encoding.ASCII.GetString(buffer, 0, bytesRec);

                        if (sDatasReceived.IndexOf("<EOF>") > -1)
                        {
                            // Show the data on the console.
                            Console.WriteLine("Text received : {0}", sDatasReceived);
                            // Echo the data back to the client.
                            byte[] msg = Encoding.ASCII.GetBytes(sDatasReceived);
                            client.Send(msg);
                            sDatasReceived = "";
                            buffer = null;
                        }
                        else if (sDatasReceived.IndexOf("exit") > -1)
                        {
                            client.Shutdown(SocketShutdown.Both);
                            client.Close();
                            sDatasReceived = "";
                            buffer = null;
                            break;
                        }
                    }
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e.ToString());
            }
        }
    }
}
But I have some questions about that :
The Listen method of Socket has a parameter: backlog. According to MSDN, backlog is the number of available connections. I don't know why, but when I pass 0, I can still connect to my server with multiple Telnet sessions. EDIT: 0 and 1 both allow 2 connections (1 current, 1 pending), 2 allows 3 connections (1 current, 2 pending), etc... So I didn't understand MSDN's meaning well.
Can you confirm that the Accept method takes each connection one after another? Is that why I see text from different Telnet sessions in my server?
Can you confirm that (since my server is a C# library) I can't stop my server (with this kind of code) without killing the process? It would be possible with threads, but that will come later.
If something is wrong in my code, please help me :)
I will come back soon with a simple multi-threaded socket server, but I don't know how yet (I think there is one step left before using threads or async/await).
First off, do your best not to even learn this. If you can possibly use a SignalR server, then do so. There is no such thing as a "simple" socket server at the TCP/IP level.
If you insist on the painful route (i.e., learning proper TCP/IP server design), then there's a lot to learn. First, the MSDN examples are notoriously bad starting points; they barely work and tend to not handle any kind of error conditions, which is absolutely necessary in the real world when working at the TCP/IP level. Think of them as examples of how to call the methods, not examples of socket clients or servers.
I have a TCP/IP FAQ that may help you, including a description of the backlog parameter. This is how many connections the OS will accept on your behalf before your code gets around to accepting them, and it's only a hint anyway.
To answer your other questions: A single call to Accept will accept a single new socket connection. The code as-written has an infinite loop, so it will work like any other infinite loop; it will continue executing until it encounters an exception or its thread is aborted (which happens on process shutdown).
If something is wrong in my code, please help me
Oh, yes. There are lots of things wrong with this code. It's an MSDN socket example, after all. :) Off the top of my head:
The buffer size is an arbitrary value, rather low. I would start at 8K myself, so it's possible to get a full Ethernet packet in a single read.
The Bind explicitly uses the loopback address. OK for playing around, I guess, but remember to set this to IPAddress.Any in the real world.
The backlog parameter is OK for testing, but should be int.MaxValue on a true server to enable the dynamic backlog in modern server OSes.
Code will fall through the first catch and attempt to Accept after a Bind/Listen failed.
If any exception occurs (e.g., from Listen or Receive), then the entire server shuts down. Note that a client socket being terminated will result in an exception that should be logged/ignored, but it would stop this server.
The read buffer is re-allocated on each time through the loop, even though the old buffer is never used again.
ASCII is a lossy encoding.
If a client cleanly shuts down without sending <EOF>, then the server enters an infinite busy loop.
Received data is not properly separated into messages; it is possible that the echoed message contains all of one message and part of another. In this particular example it doesn't matter (since it's just an echo server and it's using ASCII instead of a real encoding), but this example hides the fact that you need to handle message framing properly in any real-world application.
The decoding should be done after the message framing. This isn't necessary for ASCII (a lossy encoding), but it's required for any real encodings like UTF8.
Since the server is only either receiving or sending at any time (and never both), it cannot detect or recover from a half-open socket situation. A half-open socket will cause this server to hang.
The server is only capable of a single connection at a time.
That was just after a brief readthrough. There could easily be more.
Using C#, does broadcasting over UDP repeatedly send its packet, or just once?
I've never used this technology before, and I want to temporarily broadcast a small bit of info (a small one-line string) over the LAN. If the receiving end isn't ready, will the broadcast repeat itself, or was it a one-time thing? The code I'm using is from here. What I want is to start the broadcaster on one machine and, a few minutes later, start the receiver and retrieve what the broadcaster sent.
Here is the code
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class Broadcst
{
    public static void Main()
    {
        Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        IPEndPoint iep1 = new IPEndPoint(IPAddress.Broadcast, 9050);
        IPEndPoint iep2 = new IPEndPoint(IPAddress.Parse("192.168.1.255"), 9050);
        string hostname = Dns.GetHostName();
        byte[] data = Encoding.ASCII.GetBytes(hostname);

        sock.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.Broadcast, 1);
        sock.SendTo(data, iep1);
        sock.SendTo(data, iep2);
        sock.Close();
    }
}
UDP by design only sends a packet once. It has no concept of handshakes (unlike TCP), error correction, or transmission guarantees. You can't even be sure that your data gets to where you send it unless you manually request checksums or something like that.
Wikipedia has a nice section on this: Reliability and congestion control solutions in UDP.
So, yes, you will need to implement transmission guarantee code if you want reliability. But what if the message from the recipient saying that the data was received is delayed? Well, then you need to implement some kind of timeout. What if the message gets lost? You need to resend the data to the recipient. How do you know if the recipient gets it this time? Etc...
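As a hedged sketch of what that ack/timeout/resend loop might look like (the port, message, and retry counts here are made up for illustration, and a real implementation would also need sequence numbers so stale ACKs aren't mistaken for fresh ones):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

// Sketch: send a datagram and resend it until an "ACK" reply arrives
// or we give up. Loopback address used purely for illustration.
class ReliableSend
{
    static void Main()
    {
        using (var client = new UdpClient())
        {
            client.Client.ReceiveTimeout = 500; // ms to wait for the ACK

            var target = new IPEndPoint(IPAddress.Loopback, 9050);
            byte[] data = Encoding.ASCII.GetBytes("hello");

            for (int attempt = 0; attempt < 5; attempt++)
            {
                client.Send(data, data.Length, target);
                try
                {
                    IPEndPoint from = null;
                    byte[] reply = client.Receive(ref from);
                    if (Encoding.ASCII.GetString(reply) == "ACK")
                        return; // delivered and acknowledged
                }
                catch (SocketException)
                {
                    // Timeout: no ACK yet, fall through and resend.
                }
            }
            Console.WriteLine("gave up after 5 attempts");
        }
    }
}
```

This is exactly the kind of bookkeeping TCP does for you, which is why the answer above suggests switching to TCP if you need reliability.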
If you don't want to do this, then I'd suggest looking into TCP which automatically manages this for you.
I am using the following code (built from answers to my previous questions on SO):
public void Start()
{
    listener = new TcpListener(IPAddress.Any, 9030);
    listener.Start();
    Console.WriteLine("Listening...");
    StartAccept();
}

private void StartAccept()
{
    listener.BeginAcceptTcpClient(HandleAsyncConnection, listener);
}

private void HandleAsyncConnection(IAsyncResult res)
{
    StartAccept();
    TcpClient client = listener.EndAcceptTcpClient(res);

    StringBuilder sb = new StringBuilder();
    var data = new byte[client.ReceiveBufferSize];

    using (NetworkStream ns = client.GetStream())
    {
        int readCount;
        while ((readCount = ns.Read(data, 0, client.ReceiveBufferSize)) != 0)
        {
            sb.Append(Encoding.UTF8.GetString(data, 0, readCount));
        }

        // Do work

        // Test reply
        Byte[] replyData = System.Text.Encoding.ASCII.GetBytes(DateTime.Now.ToString());
        ns.Write(replyData, 0, replyData.Length);
        ns.Flush();
        ns.Close();
    }
    client.Close();
}
The line "Do work" represents where I will do the required processing for my client.
However I can't see how to use this code to read the client's data and then reply to it. When using this code I can read perfectly what is sent by my client, however once that occurs the client locks up and eventually complains that the connection was terminated. It does not receive my reply.
Any ideas on how to fix this?
Okay, first of all, you are mixing asynchronous calls (BeginAcceptTcpClient) and synchronous calls (Read and Write). That completely defeats the purpose of asynchronous code. Second, maybe this is why your socket gets closed? Performing a sync op on an async socket. I'm not sure, but without the client code it's impossible to tell.
Anyway, this is NOT how you build an asynchronous, multi-client server.
Here is a fully asynchronous server implementation : http://msdn.microsoft.com/en-us/library/fx6588te.aspx
I think you should use a length header to allocate your buffer. Read could need to be called multiple times; I think there is no guarantee you receive everything in one block.
You have misunderstood how Read works. It blocks until something is received from the other endpoint. The only time it returns 0 is when the other side has disconnected, so you will continue reading until the other side disconnects.
When using TCP you need to know when a message ends. You can do that either by sending the message length first as a header, or by using a suffix (such as a line feed) after each message. Then you should keep reading until the complete message has arrived.
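A minimal sketch of the length-header approach (a 4-byte length prefix followed by the payload; the helper names are my own, not from the question's code):

```csharp
using System;
using System.IO;

// Sketch: length-prefixed message framing over any Stream.
// WriteMessage prepends a 4-byte length; ReadMessage loops until the
// whole payload has arrived, since Stream.Read may return partial data.
public static class Framing
{
    public static void WriteMessage(Stream stream, byte[] payload)
    {
        byte[] header = BitConverter.GetBytes(payload.Length);
        stream.Write(header, 0, 4);
        stream.Write(payload, 0, payload.Length);
    }

    public static byte[] ReadMessage(Stream stream)
    {
        byte[] header = ReadExactly(stream, 4);
        int length = BitConverter.ToInt32(header, 0);
        return ReadExactly(stream, length);
    }

    private static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException("peer disconnected mid-message");
            offset += read;
        }
        return buffer;
    }
}
```

With this in place, the server would call ReadMessage once per request and reply with WriteMessage, instead of reading until the connection closes.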