Related
First, I don't know if Stack Overflow is the best site to post this kind of question, but I don't know of any other site like it.
In order to properly understand TCP programming in C#, I decided to try every possible approach from scratch. Here is what I want to cover (not necessarily in this order):
- Simple One Thread Socket Server (this article)
- Simple Multiple Threads Socket Server (I don't know how yet, because threads are complicated)
- Simple Thread Socket Server (put the client management in another thread)
- Multiple Threads Socket Server
- Using tcpListener
- Using async / Await
- Using tasks
The ultimate objective is to know how to build the best TCP server, without just copying and pasting parts of code, but properly understanding everything.
So, this is my first part: a single-threaded TCP server.
Here is my code, but I don't expect anyone to find much to correct, because it's largely a copy from MSDN: http://msdn.microsoft.com/en-us/library/6y0e13d3(v=vs.110).aspx
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Sockets;
using System.Text;
namespace SimpleOneThreadSocket
{
    public class ServerSocket
    {
        private int _iPport = -1;
        private static int BUFFER_SIZE = 1024;
        private Socket _listener = null;

        public ServerSocket(int iPort)
        {
            // Create a TCP/IP socket.
            this._listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

            // Save the port
            this._iPport = iPort;
        }

        public void Start()
        {
            byte[] buffer = null;
            String sDatasReceived = null;

            // Bind the socket to loopback address
            try
            {
                this._listener.Bind(new System.Net.IPEndPoint(System.Net.IPAddress.Loopback, _iPport));
                this._listener.Listen(2);
            }
            catch (Exception e)
            {
                System.Console.WriteLine(e.ToString());
            }

            // Listening
            try
            {
                Console.WriteLine("Server listening on 127.0.0.1:" + _iPport);

                while (true)
                {
                    Socket client = this._listener.Accept();
                    Console.WriteLine("Incoming connection from : " + IPAddress.Parse(((IPEndPoint)client.RemoteEndPoint).Address.ToString()) + ":" + ((IPEndPoint)client.RemoteEndPoint).Port.ToString());

                    // An incoming connection needs to be processed.
                    while (true)
                    {
                        buffer = new byte[BUFFER_SIZE];
                        int bytesRec = client.Receive(buffer);
                        sDatasReceived += Encoding.ASCII.GetString(buffer, 0, bytesRec);

                        if (sDatasReceived.IndexOf("<EOF>") > -1)
                        {
                            // Show the data on the console.
                            Console.WriteLine("Text received : {0}", sDatasReceived);

                            // Echo the data back to the client.
                            byte[] msg = Encoding.ASCII.GetBytes(sDatasReceived);
                            client.Send(msg);
                            sDatasReceived = "";
                            buffer = null;
                        }
                        else if (sDatasReceived.IndexOf("exit") > -1)
                        {
                            client.Shutdown(SocketShutdown.Both);
                            client.Close();
                            sDatasReceived = "";
                            buffer = null;
                            break;
                        }
                    }
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e.ToString());
            }
        }
    }
}
But I have some questions about that:
The Listen method on Socket has a parameter: backlog. According to MSDN, backlog is the number of available connections. I don't know why, but when I pass 0, I can still connect to my server with multiple Telnet sessions. EDIT: 0 and 1 both allow 2 connections (1 current, 1 pending), 2 allows 3 connections (1 current, 2 pending), etc... So I didn't really understand what MSDN means.
Can you confirm that the Accept method takes each connection one after another, and that's why I see text from different Telnet sessions in my server?
Can you confirm that (my server being a C# library) I can't stop my server (with this kind of code) without killing the process? It could be possible with threads, but that will come later.
If something is wrong in my code, please help me :)
I will come back soon with a simple multi-threaded socket server, but I don't know how yet (I think there is one step to take before using threads or async/await).
First off, do your best not to even learn this. If you can possibly use a SignalR server, then do so. There is no such thing as a "simple" socket server at the TCP/IP level.
If you insist on the painful route (i.e., learning proper TCP/IP server design), then there's a lot to learn. First, the MSDN examples are notoriously bad starting points; they barely work and tend to not handle any kind of error conditions, which is absolutely necessary in the real world when working at the TCP/IP level. Think of them as examples of how to call the methods, not examples of socket clients or servers.
I have a TCP/IP FAQ that may help you, including a description of the backlog parameter. This is how many connections the OS will accept on your behalf before your code gets around to accepting them, and it's only a hint anyway.
To answer your other questions: A single call to Accept will accept a single new socket connection. The code as-written has an infinite loop, so it will work like any other infinite loop; it will continue executing until it encounters an exception or its thread is aborted (which happens on process shutdown).
If something is wrong in my code, please help me
Oh, yes. There are lots of things wrong with this code. It's an MSDN socket example, after all. :) Off the top of my head:
The buffer size is an arbitrary value, rather low. I would start at 8K myself, so it's possible to get a full Ethernet packet in a single read.
The Bind explicitly uses the loopback address. OK for playing around, I guess, but remember to set this to IPAddress.Any in the real world.
backlog parameter is OK for testing, but should be int.MaxValue on a true server to enable the dynamic backlog in modern server OSes.
Code will fall through the first catch and attempt to Accept after a Bind/Listen failed.
If any exception occurs (e.g., from Listen or Receive), then the entire server shuts down. Note that a client socket being terminated will result in an exception that should be logged/ignored, but it would stop this server.
The read buffer is re-allocated each time through the loop, even though the old buffer is never used again.
ASCII is a lossy encoding.
If a client cleanly shuts down without sending <EOF>, then the server enters an infinite busy loop.
Received data is not properly separated into messages; it is possible that the echoed message contains all of one message and part of another. In this particular example it doesn't matter (since it's just an echo server and it's using ASCII instead of a real encoding), but this example hides the fact that you need to handle message framing properly in any real-world application.
The decoding should be done after the message framing. This isn't necessary for ASCII (a lossy encoding), but it's required for any real encodings like UTF8.
Since the server is only either receiving or sending at any time (and never both), it cannot detect or recover from a half-open socket situation. A half-open socket will cause this server to hang.
The server is only capable of a single connection at a time.
That was just after a brief readthrough. There could easily be more.
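For illustration, here is a minimal sketch of how the inner receive loop could address a couple of these points (reusing a larger buffer, and treating a zero-byte Receive as a clean client shutdown). It is only a sketch, not a production server, and it still punts on message framing:
// Sketch only: the buffer is allocated once and reused, and a zero-byte
// Receive (clean client shutdown) ends the loop instead of spinning forever.
byte[] buffer = new byte[8192];
StringBuilder received = new StringBuilder();
while (true)
{
    int bytesRec = client.Receive(buffer);
    if (bytesRec == 0)
    {
        // The client closed its side cleanly.
        client.Shutdown(SocketShutdown.Both);
        client.Close();
        break;
    }
    received.Append(Encoding.ASCII.GetString(buffer, 0, bytesRec));
    // ... message framing / <EOF> detection would go here ...
}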
I have a socket connection that receives data, and reads it for processing.
When data is not processed/pulled from the socket fast enough, there is a bottleneck at the TCP level and the received data is delayed (I can tell by the timestamps after parsing).
How can I see how many TCP bytes are waiting to be read from the socket? (via some external tool like Wireshark or something else)
private void InitiateRecv(IoContext rxContext)
{
    rxContext._ipcSocket.BeginReceive(rxContext._ipcBuffer.Buffer, rxContext._ipcBuffer.WrIndex,
        rxContext._ipcBuffer.Remaining(), 0, CompleteRecv, rxContext);
}

private void CompleteRecv(IAsyncResult ar)
{
    IoContext rxContext = ar.AsyncState as IoContext;
    if (rxContext != null)
    {
        int rxBytes = rxContext._ipcSocket.EndReceive(ar);
        if (rxBytes > 0)
        {
            EventHandler<VfxIpcEventArgs> dispatch = EventDispatch;
            dispatch(this, new VfxIpcEventArgs(rxContext._ipcBuffer));
            InitiateRecv(rxContext);
        }
    }
}
The fact is, I suspect the "dispatch" somehow blocks the reception until it is done, resulting in latency (i.e. the data processed by the dispatch is delayed), hence my (possibly false) conclusion that data was accumulating at the socket level or before it.
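If that is the case, one option I am considering (just a rough sketch, and it assumes VfxIpcEventArgs copies or otherwise owns the buffer contents, which I would have to verify) is to hand the dispatch off to the thread pool so the next receive can start immediately:
private void CompleteRecv(IAsyncResult ar)
{
    IoContext rxContext = ar.AsyncState as IoContext;
    if (rxContext != null)
    {
        int rxBytes = rxContext._ipcSocket.EndReceive(ar);
        if (rxBytes > 0)
        {
            EventHandler<VfxIpcEventArgs> dispatch = EventDispatch;
            VfxIpcEventArgs args = new VfxIpcEventArgs(rxContext._ipcBuffer);

            // Run the handler on a worker thread so this callback returns quickly.
            ThreadPool.QueueUserWorkItem(state => dispatch(this, args));

            // Keep the socket draining while the handler runs.
            InitiateRecv(rxContext);
        }
    }
}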
How can I see how many TCP bytes are waiting to be read from the socket
By specifying a protocol that indicates how many bytes it's about to send. Using sockets you operate a few layers above the byte level, and you can't see how many send() calls end up as receive() calls on your end because of buffering and delays.
If you specify the number of bytes on beforehand, and send a string like "13|Hello, World!", then there's no problem when the message arrives in two parts, say "13|Hello" and ", World!", because you know you'll have to read 13 bytes.
You'll have to keep some sort of state and a buffer in between different receive() calls.
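For example, here is a minimal sketch of length-prefixed framing over a TCP stream; it assumes a 4-byte little-endian length header instead of the text-based "13|" prefix above:
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

static class LengthPrefixedFraming
{
    // Send one message preceded by its length.
    public static void SendMessage(Socket socket, string message)
    {
        byte[] payload = Encoding.UTF8.GetBytes(message);
        byte[] header = BitConverter.GetBytes(payload.Length);   // 4-byte length prefix
        socket.Send(header);
        socket.Send(payload);
    }

    // Read exactly one message, regardless of how TCP split it into segments.
    public static string ReceiveMessage(Socket socket)
    {
        byte[] header = ReadExactly(socket, 4);
        int length = BitConverter.ToInt32(header, 0);
        byte[] payload = ReadExactly(socket, length);
        return Encoding.UTF8.GetString(payload);
    }

    // Loop until the requested number of bytes has arrived.
    private static byte[] ReadExactly(Socket socket, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
            if (read == 0)
                throw new EndOfStreamException("Connection closed mid-message.");
            offset += read;
        }
        return buffer;
    }
}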
When it comes to external tools like Wireshark, they cannot know how many bytes are left in the socket. They only know which packets have passed by the network interface.
The only way to check it with Wireshark is to actually know the last bytes you read from the socket, locate them in Wireshark, and count from there.
However, the best way to get this information is to check the Available property on the socket object in your .NET application.
You can use Socket.Available if you are using the normal Socket class. Otherwise you have to define a header that gives the number of bytes to be sent from the other end.
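For what it's worth, a tiny sketch of checking Available before a read (it only reports what the local receive buffer already holds, not what is still in flight on the network):
// Bytes already buffered by the OS for this socket and readable right now.
int pending = socket.Available;
if (pending > 0)
{
    byte[] buffer = new byte[pending];
    int read = socket.Receive(buffer, 0, pending, SocketFlags.None);
    // 'read' can still be less than 'pending'; always use the return value.
}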
I am doing some basic Socket messaging. I have a routine that works well but there is a problem under load.
I'm using UDP to do a connectionless SendTo to basically do a ping-like operation to see if any of my listeners are out there on the LAN. Ideally I would just use the broadcast address, but wireless routers don't seem to relay my broadcast. My workaround is to iterate through all IPs on the subnet and send my datagram to each IP. The other PCs are listening, and if they get the message they will reply; that is how peers find each other. Here is the code inside the loop which sends the datagram to each IP in the subnet.
string msgStr = "some message here...";
byte[] sendbuf = Encoding.ASCII.GetBytes(msgStr);
Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
socket.Blocking = true;
socket.SendTo(sendbuf, remoteEndPt);
//socket.Close();
This works, but when the subnet range is large, say 255.255.0.0 (meaning ~60,000 IPs to iterate through), I will eventually get a SocketException with error code 10022, meaning "Invalid Argument". This tends to happen after ~10,000 or so successful sends, and then I start to see this error. Also, the router I use at work handles it (and is presumably a high-powered router), but the cheap-o one in my lab is the one that produces the error.
If I put in a wait time after catching the SocketException and before resuming the loop it will typically recover but eventually I'll get the error again.
I think what is happening is that the buffer on the router gets full and I cannot send anymore data. The higher quality one at work can handle it but the cheap-o one gets bogged down. Does that sound plausible?
A couple questions:
1) When using SendTo in a connectionless manner, do I need to call Close() on my Socket?
I haven't seen any benefit in calling Close(), but when I do call Close() it severely slows down my iteration (I have it commented out above because it slows things down a lot). Does this make sense?
2) Is there a way for me to tell that I should wait before trying to send more data? It doesn't seem right to just catch the exception when I still don't know what is causing it.
Thanks, J.
I am not sure it is only the router; I suspect you are also running into some limit in the OS...
Any reason you are creating the Socket every time you send?
Just reuse it...
Anyway, according to http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.aspx it is a good idea to call Shutdown() and then Close() on the Socket... perhaps not with every send, but every 255 IPs or so...
Check out UdpClient - that could make the implementation easier / more robust.
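For example, a rough sketch of the same ping sweep with a single reused UdpClient (the subnet, port and message text below are just placeholders):
using System.Net;
using System.Net.Sockets;
using System.Text;

// One client reused for the whole sweep instead of a new Socket per IP.
using (UdpClient udp = new UdpClient())
{
    byte[] sendbuf = Encoding.ASCII.GetBytes("some message here...");
    for (int host = 1; host < 255; host++)   // placeholder: iterate your subnet here
    {
        IPEndPoint remoteEndPt = new IPEndPoint(IPAddress.Parse("192.168.1." + host), 9050);
        udp.Send(sendbuf, sendbuf.Length, remoteEndPt);
    }
}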
EDIT - as per comment:
If you want a Socket reuse "cache"... this, for example, would make sure that a specific Socket is only used every 256 checks...
// build/fill your Socket queue, for example in the constructor
class SocketExample
{
    Queue<Socket> a = new Queue<Socket>();

    SocketExample()
    {
        int ii = 0, C = 256;
        for (ii = 0; ii < C; ii++)
        {
            a.Enqueue(new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp));
        }
    }

    // in your function you just dequeue a Socket and use it,
    // after you are finished you enqueue it
    void CheckNetIP (some parameters...)
    {
        Socket S = a.Dequeue();

        // do whatever you want to do...

        // IF there is no exception
        a.Enqueue(S);
    }
}
I've been fighting with this issue for a day and I can't find answer for it.
I am trying to read data from a GPS device through a COM port in Compact Framework C#. I am using the SerialPort class (actually my own ComPort class wrapping SerialPort, but it only adds two fields I need, nothing special).
Anyway, I am running a while loop in a separate thread which reads a line from the port, analyzes the NMEA data, prints it, catches all exceptions and then Sleep(200)s the thread, because I need the CPU for other threads... Without Sleep it works fine, but uses 100% CPU. When I use Sleep, after a few minutes the output from the COM port looks like this:
GPGSA,A,3,09,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F
GSA,A,3,09,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F
A,A,3,09,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F
,18,12,271,24,24,05,020,24,14,04,326,25,11,03,023,*76
A,3,09,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F
3,09,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F
09,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F
,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F
As you can see, the same message is read a few times but cut off.
I wonder what I'm doing wrong...
My port configuration:
port.ReadBufferSize = 4096;
port.BaudRate = 4800;
port.DataBits = 8;
port.Parity = Parity.None;
port.StopBits = StopBits.One;
port.NewLine = "\r\n";
port.ReadTimeout = 1000;
port.ReceivedBytesThreshold = 100000;
And my reading function:
private void processGps()
{
    while (!closing)
    {
        //reconnect if needed
        try
        {
            string sentence = port.ReadLine();
            //here print the sentence
            //analyze the sentence (this takes some time 50-100ms)
        }
        catch (TimeoutException)
        {
            Thread.Sleep(0);
        }
        catch (IOException ioex)
        {
            //handling IO exception (some info on the screen)
        }
        Thread.Sleep(200);
    }
}
There is some more stuff in this function, like reconnecting if the device is lost etc., but it is not called when the GPS is connected properly. I also tried calling
port.DiscardInBuffer();
after some blocks of code (in the TimeoutException handler, after the read).
Has anyone had a similar problem? I really don't know what I'm doing wrong. The only way to get rid of it is removing the last Sleep.
For all those who have a similar problem: the first issue was overflowing the buffer. I had a 4096-byte buffer and the data was simply overflowing it, so I was reading corrupted sentences. Now I read the whole buffer at once and analyze it. The first sentence is sometimes corrupted, but the rest is OK.
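A rough sketch of that approach (reading whatever is available in one go and splitting on line endings; port and the sentence handling come from the question, the pending buffer field is an addition for illustration, and it assumes SerialPort.ReadExisting is available on your Compact Framework version):
// Accumulates partial data between reads so sentences split across reads survive.
private StringBuilder pending = new StringBuilder();

private void ReadAllAvailable()
{
    // Pull everything currently buffered instead of one line at a time.
    pending.Append(port.ReadExisting());

    string data = pending.ToString();
    int lastNewline = data.LastIndexOf('\n');
    if (lastNewline < 0)
        return;   // no complete sentence yet

    // Everything up to the last newline is complete; keep the remainder for next time.
    string complete = data.Substring(0, lastNewline);
    pending.Length = 0;
    pending.Append(data.Substring(lastNewline + 1));

    foreach (string line in complete.Split('\n'))
    {
        string sentence = line.Trim('\r', '\n');
        if (sentence.Length > 0)
        {
            // analyze the NMEA sentence here
        }
    }
}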
The second thing was a device issue. The TomTom MkII sometimes loses the connection with the device. I had to restart the GPS and find it again in the Bluetooth devices list.
Regards
There's nothing in your post to say how you are doing handshaking.
Normally you would use software (XON/XOFF) or hardware (e.g. RTS/CTS) handshaking so that the serial port tells the transmitting device to stop when it is unable to receive more data. The handshaking configuration must (of course) match the configuration of the transmitting device.
If you fail to configure handshaking correctly, you may get away with it as long as you are processing the data fast enough - but when you have a Sleep, data may be lost.
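For example, enabling flow control on the port looks like this (assuming the GPS device actually supports it; the setting must match the device's configuration):
using System.IO.Ports;

// Must match what the transmitting device expects.
port.Handshake = Handshake.RequestToSend;    // hardware RTS/CTS flow control
// or: port.Handshake = Handshake.XOnXOff;   // software flow control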
I read some C# chat source code and I see that on a chat server with a lot of connected clients, the server listener runs in a separate thread and each connected client is also handled in a separate thread.
Code examples:
Start the server and begin listening in a separate thread:
public void StartListening()
{
    // Get the IP of the first network device, however this can prove unreliable on certain configurations
    IPAddress ipaLocal = ipAddress;

    // Create the TCP listener object using the IP of the server and the specified port
    tlsClient = new TcpListener(1986);

    // Start the TCP listener and listen for connections
    tlsClient.Start();

    // The while loop will check for true in this before checking for connections
    ServRunning = true;

    // Start the new thread that hosts the listener
    thrListener = new Thread(KeepListening);
    thrListener.Start();
}

private void KeepListening()
{
    // While the server is running
    while (ServRunning == true)
    {
        // Accept a pending connection
        tcpClient = tlsClient.AcceptTcpClient();

        // Create a new instance of Connection
        Connection newConnection = new Connection(tcpClient);
    }
}
And each connection also runs in a separate thread:
public Connection(TcpClient tcpCon)
{
    tcpClient = tcpCon;

    // The thread that accepts the client and awaits messages
    thrSender = new Thread(AcceptClient);

    // The thread calls the AcceptClient() method
    thrSender.Start();
}
So, with a chat server that has 10,000 connected clients, the chat server application will have 10,002 threads (one main thread, one listener thread and 10,000 client threads). I think the chat server will be overwhelmed by such a large number of threads. Please help me find a solution. Thanks.
UPDATE:
I believe these chat examples are only for learning networking and are not suitable as a real-world model. Please give me a real-world solution. Thanks.
If you use .NET Framework 2.0 SP2 or higher, then you can use the newer asynchronous sockets model based on I/O completion ports. In this case you shouldn't create your own threads, because the I/O completion ports do all the work for you.
Here are some examples:
tcpServer = new System.Net.Sockets.TcpListener(IPAddress.Any, port);
tcpServer.Start();
tcpServer.BeginAcceptSocket(EndAcceptSocket, tcpServer);

private void EndAcceptSocket(IAsyncResult asyncResult)
{
    TcpListener lister = (TcpListener)asyncResult.AsyncState;
    Socket sock = lister.EndAcceptSocket(asyncResult);

    //handle the socket connection (you may add the socket to your internal storage or something)

    //start accepting other sockets
    lister.BeginAcceptSocket(EndAcceptSocket, lister);

    SocketAsyncEventArgs e = new SocketAsyncEventArgs();
    e.Completed += ReceiveCompleted;
    e.SetBuffer(new byte[socketBufferSize], 0, socketBufferSize);
    sock.ReceiveAsync(e);
}

void ReceiveCompleted(object sender, SocketAsyncEventArgs e)
{
    var sock = (Socket)sender;
    if (!sock.Connected)
    {
        //handle socket disconnection
    }

    // e.BytesTransferred is the number of bytes actually received
    var buf = new byte[e.BytesTransferred];
    Array.Copy(e.Buffer, buf, e.BytesTransferred);
    //handle received data

    //start reading new data
    sock.ReceiveAsync(e);
}
A standard mechanism to ease the burden is known as selection, which can multiplex multiple Socket instances to watch for the ones that are ready to be read or written to. See this document: http://codeidol.com/csharp/csharp-network/Csharp-Network-Programming-Classes/Csharp-Socket-Programming/ and scroll down to the section on select().
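A minimal sketch of what select-based multiplexing looks like with Socket.Select (the listener socket and the client list below are assumptions for illustration):
using System.Collections.Generic;
using System.Net.Sockets;

// One thread watches every socket and only touches the ones that are ready.
List<Socket> clients = new List<Socket>();
List<Socket> checkRead = new List<Socket>();

while (true)
{
    checkRead.Clear();
    checkRead.Add(listener);        // "readable" on the listener means a connection is pending
    checkRead.AddRange(clients);

    // Blocks for up to one second; on return, checkRead contains only the ready sockets.
    Socket.Select(checkRead, null, null, 1000000);

    foreach (Socket s in checkRead)
    {
        if (s == listener)
        {
            clients.Add(listener.Accept());             // new client connected
        }
        else
        {
            byte[] buffer = new byte[8192];
            int read = s.Receive(buffer);
            if (read == 0)
            {
                s.Close();                              // client went away
                clients.Remove(s);
            }
            // else: handle buffer[0..read) for this client
        }
    }
}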
1) You'll NEVER want that many threads running - even if you could get them to run on your box (which you can't; each thread has a stack associated with it that takes real RAM, and as you start more and more of them you'll run out of physical resources on your box and watch it blow up).
2) You'll want to look into thread pooling - using a smaller number of threads to tackle a larger amount of work - typically reading from a queue of work items that you try to get through as quickly as possible.
3) You'll want to look into I/O completion ports - a means of getting a callback when I/O (like a disk read or network I/O) is waiting for you to take action - think of a thread (or pool of threads) dedicated to receiving I/O notifications and shoving the action to take for that I/O into a queue, and then another pool of threads that takes care of the actual messaging/logging/etc.
4) What happens when you scale beyond one machine? Which you hope to do if you're successful, right? :-) Typically people dedicate a set of N machines to chat, then hash based on an identifier for the user (think a GUID that represents the user, or a UserID/bigint, depending on what corresponds to some internal authentication token that is consistent from login to login), which allows them to deterministically route the user's status/state information to a specific machine in that set of N boxes dedicated to messaging. So if a user that hashes to server N[2] needs to check whether their friends are logged in, it is easy to know, for each of their friends, exactly which machine that friend's status should be on, because the backend consistently hashes those friends to the IM machine that corresponds to each userid hash (i.e. you know just from the userid which server in the farm should be handling the IM status for that user); a tiny hashing sketch follows below.
Just don't think you're going to spin up a bunch of threads and that will save the day. It's sloppy and only works at very small scale.
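As a tiny sketch of that kind of deterministic routing (the GUID user id and server count are assumptions for illustration; a stable hash is used rather than GetHashCode, which can vary between runtimes):
// Always maps the same user to the same box in the messaging farm.
static int ServerIndexFor(Guid userId, int serverCount)
{
    int hash = 17;
    foreach (byte b in userId.ToByteArray())
        hash = unchecked(hash * 31 + b);
    return (hash & 0x7FFFFFFF) % serverCount;
}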
To make matters worse, you would also have to communicate between some arbitrary number of threads (it's a chat server; people want to talk to each other, not to themselves). I would suggest looking into UDP - it can be done with a single thread on the server and fits the network activity well - people rarely write more than a couple of sentences at a time in chat exchanges, which is very convenient for size-limited UDP datagrams.
There are other approaches, of course, but one sure thing is that you will never be able to do thread-per-socket at that scale.
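As a rough illustration of that single-threaded UDP approach (the port number is a placeholder and the routing of messages to recipients is omitted):
using System.Net;
using System.Net.Sockets;
using System.Text;

// One thread services every chat client; each datagram is one message.
using (UdpClient server = new UdpClient(9050))            // placeholder port
{
    IPEndPoint sender = new IPEndPoint(IPAddress.Any, 0);
    while (true)
    {
        byte[] datagram = server.Receive(ref sender);     // blocks until a message arrives
        string text = Encoding.UTF8.GetString(datagram);

        // Look up the recipient's endpoint for this sender/message and relay it, e.g.:
        // server.Send(datagram, datagram.Length, recipientEndPoint);
    }
}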
I suggest you read this great article in MSDN Magazine.
It describes:
Threaded Server
Select-Based Server
Asynchronous Server
with code in C# and VB.NET