I'm working on a server project (and I want to make an MMO server with it). I have created something, but I don't know whether it is a good system. Namely, is there an MMO server tutorial/book that helps with creating a high-performance socket server? I'll send/receive 5 KB of data to every connected client (because this is an MMO system), and the server must handle ~2000 clients/s. Can anybody show me a good starting point?
There is one: C# SocketAsyncEventArgs High Performance Socket Code. Based on what I have learnt from it (and some other resources), I have written a high-performance TCP server handling more than 7000 clients.
Edit: Other good .NET code bases I studied to some extent are fracture (F#), SocketAwaitable, and SuperSocket. I especially like fracture because of its simple (but not naive) and smart buffer-pool handling, but (at least in the version I worked with) it does not provide a separate pool for acceptors, which I easily added myself on top of the pool it already provides.
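To give a flavour of what that pattern looks like, here is a minimal sketch of a SocketAsyncEventArgs pool of my own; it is not code from the linked article or from fracture, and all names and sizes are made up:

    using System;
    using System.Collections.Concurrent;
    using System.Net.Sockets;

    // A pool of reusable SocketAsyncEventArgs objects: renting instead of
    // allocating avoids per-operation garbage, and each buffer is allocated
    // (and pinned by the socket layer) only once.
    class SaeaPool
    {
        private readonly ConcurrentStack<SocketAsyncEventArgs> _pool =
            new ConcurrentStack<SocketAsyncEventArgs>();

        public SaeaPool(int count, int bufferSize,
                        EventHandler<SocketAsyncEventArgs> onCompleted)
        {
            for (int i = 0; i < count; i++)
            {
                var args = new SocketAsyncEventArgs();
                args.SetBuffer(new byte[bufferSize], 0, bufferSize);
                args.Completed += onCompleted; // raised when an async op finishes
                _pool.Push(args);
            }
        }

        public bool TryRent(out SocketAsyncEventArgs args)
        {
            return _pool.TryPop(out args);
        }

        public void Return(SocketAsyncEventArgs args)
        {
            _pool.Push(args);
        }
    }

The separate acceptor pool I mentioned is the same idea applied to the args used by AcceptAsync, so accepts never compete with receives for buffers.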
Requirements:
The need is for a Windows-service-based C# .NET 4.5 TCP server architecture with always-on (or at least long-lived) connections, vertical and horizontal scaling, and each server handling the maximum possible number of connections. Clients can be any IoT (Internet of Things) devices.
I am aware of the limitations on ports, but I still wonder why these limitations exist in this era of tech (we will always have limits, but why still the old ones?!). Also, temporary TCP/HTTP connections would scale fine, but that is not a requirement here.
Design:
Single thread per server for async-accept new connections (lifetime of server).
code: rawTcpClient = await tcpListener.AcceptTcpClientAsync();
One thread per client connection (loop) to hold the client connection? (see my question below)
A Task for performing client operations (short-term, intermittent operations)
My question on optimization (if possible):
How can I optimize/manage holding all the client connections in a set of threads / a thread pool, instead of one thread per connection, given that a connection lasts for the client's lifetime, which may be a long duration?
Ex: per server, only 50 thread-based tasks are allocated to hold connected clients so that they don't get disconnected while waiting for client data?
Efficient & Scalable
The very first thing you need to decide is how efficient you want to be. The socket APIs can get extremely complex if efficiency is your top priority. However, efficiency is almost never the top priority, even though a lot of people think it is. The problem is that complexity can increase exponentially with efficiency/scalability, and if you simply maximize efficiency/scalability, you'll end up with an almost unmaintainable system. So you'll need to decide where to draw the line on that scale.
Particularly if you have horizontal scaling, you probably don't need to use the extreme-efficiency socket APIs.
I am aware of the limitations on ports but still wonder why these limitations in this era of tech (we always have limits but why still the old ones?!).
Compatibility. Ports in particular are represented by a 16-bit value. The only way this would change is if a new standard came out and everything upgraded: NICs, gateways, ISPs, and IoT devices. That's a tall order, and it will probably never happen.
Single thread per server for async-accept new connections (lifetime of server).
That's fine. If you have a large amount of connection turnover, you can have multiple accept threads, too. Just keep your backlog high (it should be high by default on Windows Server OSes).
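For illustration, a single accept loop over TcpListener could look like the following sketch (the port, backlog value, and HandleClientAsync are placeholders, not prescribed values):

    using System.Net;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    class AcceptLoop
    {
        // One long-running accept loop; Start(backlog) sets the depth of the
        // pending-connection queue. Port and backlog here are illustrative.
        public static async Task RunAsync()
        {
            var listener = new TcpListener(IPAddress.Any, 9000);
            listener.Start(backlog: 1000);

            while (true)
            {
                TcpClient client = await listener.AcceptTcpClientAsync();
                // Fire and forget: awaiting the handler here would stop
                // the loop from accepting while one client is served.
                var ignored = HandleClientAsync(client);
            }
        }

        private static Task HandleClientAsync(TcpClient client)
        {
            return Task.FromResult(0); // per-client reads go here (see below)
        }
    }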
One thread per client connection (loop) to hold client connection?
Er, no.
You'll want to use asynchronous I/O, for sure.
You should have continuous (asynchronous) reads going on all connected clients, and then do (asynchronous) writes as necessary. Also, if the protocol permits it, you should periodically write heartbeat messages to each connected client; otherwise, you'll need a timer for each client to drop the connection. Depending on the nature of your writes, you may need to have a queue of pending writes per client.
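As a rough sketch of that shape (framing, buffer size, heartbeat interval, and all names here are assumptions of mine, not a prescribed design):

    using System;
    using System.Net.Sockets;
    using System.Threading;
    using System.Threading.Tasks;

    class ClientConnection
    {
        private readonly TcpClient _client;
        private readonly NetworkStream _stream;
        // Serializes writes so heartbeats and data never interleave on the wire.
        private readonly SemaphoreSlim _writeLock = new SemaphoreSlim(1, 1);

        public ClientConnection(TcpClient client)
        {
            _client = client;
            _stream = client.GetStream();
        }

        // Continuous read: always have a read pending on every connected client.
        public async Task ReadLoopAsync()
        {
            var buffer = new byte[4096];
            while (true)
            {
                int n = await _stream.ReadAsync(buffer, 0, buffer.Length);
                if (n == 0) break;          // remote side closed cleanly
                ProcessIncoming(buffer, n); // your message parsing goes here
            }
        }

        public async Task SendAsync(byte[] message)
        {
            await _writeLock.WaitAsync();
            try { await _stream.WriteAsync(message, 0, message.Length); }
            finally { _writeLock.Release(); }
        }

        // Periodic heartbeat so dead connections are detected by both sides.
        public async Task HeartbeatLoopAsync(byte[] heartbeat, TimeSpan interval)
        {
            while (_client.Connected)
            {
                await SendAsync(heartbeat);
                await Task.Delay(interval);
            }
        }

        private void ProcessIncoming(byte[] buffer, int count)
        {
            // parse and dispatch messages
        }
    }

Here the semaphore plays the role of the per-client queue of pending writes: writes wait their turn instead of interleaving.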
A Task for performing client operations (short-term, intermittent operations)
If you use asynchronous tasks, then all your actual code will just run on whatever threadpool thread is available. No need for dedicated tasks at all.
You may find my TCP/IP .NET Sockets FAQ helpful.
I'm an amateur programmer working on a pet project of mine and I would like some pointers on how to make a C# server application. Here's the general idea:
A client connects to the server application, which in turn fetches the necessary information from a MySQL database and sends it back to the client to be displayed, then waits for the next action.
I got the idea of making something like this after seeing a somewhat old IBM AS400 mainframe running a warehouse management system, and I thought: "Hey, I could try developing a small version of this with a nice UI that doesn't look like it stepped out of a time machine!"
I searched around and used the TcpListener class to communicate between the server and client, and I managed to send some calls and responses using one thread per client. However, I've read that this is not scalable for a large number of clients...
Am I looking at this problem the wrong way? Should I try something else? Any input will be appreciated.
You don't need to deal with TCP directly for this - WCF (Windows Communication Foundation) was written to abstract all the low level stuff from you.
Check this link out for a good example of how to create a client/server application, it has an entry level explanation, some code and a downloadable project...
http://www.codeproject.com/Articles/16765/WCF-Windows-Communication-Foundation-Example
You can find plenty of information about WCF elsewhere on the internet and here on SO.
It is a very large topic, but the situation you have described is pretty simple - so I doubt you will have problems following the example.
I'm currently writing a server architecture and came across this as well: it's fairly easy to solve.
You're right, using 1 thread per client is not efficient and is a huge waste of resources! The way around that is thread pooling. There are loads of different ways to do this, but the way I chose to implement it on my server was to add every connection to a queue and then have x threads (a number you can easily increase or decrease to handle demand) simply dequeue a connection, process it, then enqueue it again; see the sketch below.
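A minimal sketch of that queue-plus-workers idea (the names are mine, and ProcessOnce stands in for whatever your protocol actually does):

    using System.Collections.Concurrent;
    using System.Net.Sockets;
    using System.Threading;

    class ConnectionWorkers
    {
        private readonly BlockingCollection<TcpClient> _queue =
            new BlockingCollection<TcpClient>();

        // x worker threads share one queue of connections instead of one
        // thread per client; raise or lower workerCount to match demand.
        public void Start(int workerCount)
        {
            for (int i = 0; i < workerCount; i++)
            {
                new Thread(() =>
                {
                    foreach (var client in _queue.GetConsumingEnumerable())
                    {
                        ProcessOnce(client);    // handle whatever is pending
                        if (client.Connected)
                            _queue.Add(client); // re-enqueue for the next pass
                    }
                }) { IsBackground = true }.Start();
            }
        }

        public void Enqueue(TcpClient client)
        {
            _queue.Add(client);
        }

        private void ProcessOnce(TcpClient client)
        {
            // read/handle pending data for this connection
        }
    }

One caveat: re-enqueueing unconditionally makes idle connections spin through the workers, so in practice you would check client.Available (or park idle clients elsewhere) before putting them back.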
Of course, using WCF will make life easier for you and will speed things up drastically, but where's the challenge in that!
We are currently investigating the most efficient way of communicating between 120-140 embedded hardware devices running on the .NET Micro framework and a server.
Each embedded device needs to send to, and request information from the server on a fairly regular basis all in real time through TCP.
My question is this: Would it be better to initialise 140 TCP connections to the server, and then hang on to these connections, or initialise a new connection for each requests to and from the devices? Would holding on to and managing 140 TCP connections put a lot of strain on the server?
When the server detects new data in the database it needs to send this new info to 1..* devices (information is targeted to specific devices), if I held on to the 140 connections I would need to do a lookup for the correct connection each time I needed to send information instead of just sending to an IP:PORT associated with the new data.
I guess another, possibly stupid, question would be: is it actually possible to hang on to 140 TCP connections on a single port?
Any suggestions/comments are appreciated!
In general you are better off maintaining the connections for as long as possible. If each device opens a new connection every time it sends a message, you can end up effectively DoS'ing the server, as it ends up with lots of sockets in the TIME_WAIT state taking up space in its tables.
I worked on a system where there were a bunch of clients talking to a server and while they could be turned on and off regularly, it was still better to maintain the connection (and re-establish it when it had dropped and a new message needed to be sent). You may end up needing to write slightly more complex code, but I've found it to be well worth the effort for the reduced load on the server.
Modern operating systems may have bigger buffers than the ones I actually encountered the DoS effect on, but it's fundamentally not the best idea to be using lots of connections like that.
Things can get relatively complicated on the client side, especially when the device tends to go to sleep transparently to the application because that means connections will time out while the app thinks they are still open. When we did this we ended up with relatively complex network code because we needed to deal with the fact that the sockets could (and would) fail as a matter of course and we simply needed to setup a new connection and re-attempt sending the message. You just tuck this code away into your libraries and forget about it once it's done though.
In actual fact, in practice our initial application had even more complex code, because it was dealing with a network library that was semi-aware of the stop-start nature of the devices and tried to resend failed messages, sometimes meaning the same message got sent twice. We ended up adding an extra layer of communication on top to ensure duplicates got rejected. If you're using C# or regular BSD-style sockets you shouldn't have that problem, though, I'm guessing. This was a proprietary library that managed the reconnects but caused headaches with the resends and its inappropriate default time-outs.
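For completeness, the reconnect-and-retry pattern I described looks roughly like this sketch (the retry count and names are illustrative, not our actual code):

    using System;
    using System.Net.Sockets;

    // Treat socket failure as routine: on a send error, tear the connection
    // down, rebuild it, and retry the message a bounded number of times.
    class ResilientSender
    {
        private readonly string _host;
        private readonly int _port;
        private TcpClient _client;

        public ResilientSender(string host, int port)
        {
            _host = host;
            _port = port;
        }

        public void Send(byte[] message, int maxAttempts = 3)
        {
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    if (_client == null || !_client.Connected)
                        _client = new TcpClient(_host, _port); // (re)connect
                    _client.GetStream().Write(message, 0, message.Length);
                    return;
                }
                catch (Exception)
                {
                    if (_client != null) _client.Close(); // socket is dead; start fresh
                    _client = null;
                    if (attempt >= maxAttempts) throw;
                }
            }
        }
    }

Note that retrying a failed send is precisely how the duplicate deliveries mentioned above can happen, so pair this with duplicate rejection if your messages are not idempotent.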
You can usually connect far more than 140 "clients" to a server (that is, with a decent network / HW / RAM)...
I always recommend testing this sort of thing with realistic scenarios (load etc.), since there are aspects like the network (performance, stability...), HW (server RAM etc.) and SW (what exactly does the server do?) that only you can check.
Depending on the protocol you could/should even put some timeout/reconnect mechanism in there.
The lookup you mention would be really fast - just use a ConcurrentDictionary to hold the needed information with IP:PORT as the key (assuming the server runs on the full .NET 4).
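A minimal sketch of that registry (the "ip:port" key shape and storing TcpClient directly are assumptions of mine):

    using System.Collections.Concurrent;
    using System.Net.Sockets;

    class ConnectionRegistry
    {
        // Live connections keyed by "ip:port" (RemoteEndPoint.ToString()).
        private readonly ConcurrentDictionary<string, TcpClient> _connections =
            new ConcurrentDictionary<string, TcpClient>();

        public void Register(TcpClient client)
        {
            _connections[client.Client.RemoteEndPoint.ToString()] = client;
        }

        // Called when the server detects new data targeted at one device.
        public bool TrySend(string key, byte[] payload)
        {
            TcpClient client;
            if (!_connections.TryGetValue(key, out client))
                return false; // device not currently connected
            client.GetStream().Write(payload, 0, payload.Length);
            return true;
        }
    }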
For some references see:
http://msdn.microsoft.com/en-us/library/dd287191.aspx
http://geekswithblogs.net/BlackRabbitCoder/archive/2011/02/17/c.net-little-wonders-the-concurrentdictionary.aspx
EDIT - as per comments:
Holding on to a TCP/IP connection doesn't take much processing client-side... it costs a bit of memory. I would recommend to do a small test (1-2 clients) to check this assumption for your specific case.
If you are talking about a system with hardware devices, then I suggest closing the connection every time the client finishes sending data.
To make sure the client gets updates from the server, the client can wait up to 5 seconds for data to arrive from the server. If data is received within that window, close the connection and process the data. If not, close the connection and try again after sending the next set of data.
This way scaling becomes much easier. Keeping the connections open always puts a strain on resources and, in my opinion, is not necessary unless it is some life-saving device like a heart-rate monitor, oxygen-supply monitor, etc.
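A client-side sketch of that connect-send-wait-close cycle (host, port, and buffer size are placeholders; the 5-second window is the one described above):

    using System;
    using System.IO;
    using System.Net.Sockets;

    class DeviceClient
    {
        // One cycle: connect, send, wait up to 5 s for a server update, close.
        public static byte[] SendAndMaybeReceive(string host, int port, byte[] data)
        {
            using (var client = new TcpClient(host, port))
            {
                var stream = client.GetStream();
                stream.Write(data, 0, data.Length);

                stream.ReadTimeout = 5000; // the 5-second window
                var buffer = new byte[4096];
                try
                {
                    int n = stream.Read(buffer, 0, buffer.Length);
                    if (n > 0)
                    {
                        var response = new byte[n];
                        Array.Copy(buffer, response, n);
                        return response; // close first, then process
                    }
                }
                catch (IOException)
                {
                    // timeout elapsed: no update this cycle
                }
                return null;
            } // the connection is closed here either way
        }
    }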
I am attempting to send player information from my Game to my network client to then be sent off to the server.
Currently the ClientNetwork -> ClientGame relationship is held together with XML files. They read and write back and forth at very high speeds. If you use just one XML file for this exchange, one side will "hog" the file at times, causing a kind of lag where one side cannot read because the other is viciously writing and rewriting.
To fix this I have 2 of each of my XML files. If a program cannot read one, it reads the other. In theory they should be using both of them, since it'd be a trade-off from one to the other. It's not working up to par.
But my main problem is that the usage of XML in general is very sloppy: dozens of try-catch statements to keep everything happy (and my personal favorite, try-catches within try-catches -- WE HAVE TO GO DEEPER).
I am just curious of if there is a better way to be doing this. I need a static point of variables that can be accessed by both client side programs. I'm afraid someone is going to say databases...
I'd like to state, for anyone who is looking into this as well and stumbled across this page, that Shared Memory is awesome. Though I have to convert all strings to characters and then to bytes and read them one by one, on the whole it's a LOT better than dealing with things that cannot read and write the same file at the same time. If you wish to further understand it rather than just use it, go to this link; it explains a lot of the messaging varieties and how to use them.
Yes there is!
The term you are looking for is interprocess communication - communication between two processes on the same machine.
There are various methods which allow two processes on the same machine to communicate with each other, including:
Named pipes
Shared memory
Sockets
HTTP
Fortunately C# applications can simply use the WCF framework to perform IPC (interprocess communication) using one of the above, and let the WCF framework take care of the difficult bits! Here are a couple of guides to get you started (there are many more):
WCF Tutorial - Basic Interprocess Communication
Many to One Local IPC using WCF and NetNamedPipeBinding
Also, one of the neat things about WCF is that you can also use it to communicate between different machines simply by changing the "Transport" (i.e. the communication method) to one which works over a network, (e.g. HTTP).
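As a taster, a minimal named-pipe WCF service might look like this sketch (the IGameData contract, service name, and address are hypothetical, not taken from the linked guides):

    using System;
    using System.ServiceModel;

    // A hypothetical contract; replace with whatever your two processes share.
    [ServiceContract]
    public interface IGameData
    {
        [OperationContract]
        string GetPlayerInfo(int playerId);
    }

    public class GameDataService : IGameData
    {
        public string GetPlayerInfo(int playerId)
        {
            return "player " + playerId;
        }
    }

    class PipeHost
    {
        static void Main()
        {
            using (var host = new ServiceHost(typeof(GameDataService),
                                              new Uri("net.pipe://localhost/gamedata")))
            {
                host.AddServiceEndpoint(typeof(IGameData),
                                        new NetNamedPipeBinding(), "");
                host.Open();
                Console.ReadLine(); // keep the host alive
            }
        }
    }

    // Client side, in the other process:
    //   var factory = new ChannelFactory<IGameData>(
    //       new NetNamedPipeBinding(),
    //       new EndpointAddress("net.pipe://localhost/gamedata"));
    //   IGameData proxy = factory.CreateChannel();
    //   string info = proxy.GetPlayerInfo(42);

Swapping NetNamedPipeBinding for an HTTP or TCP binding (plus a network address) is the "change the Transport" step mentioned above.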
If you are targeting .NET 2.0 then you should look into either .NET Remoting or web services instead.
A simple TCP stream jumps out at me. Have the network client open a listening TCP socket, and have the game connect to the network client. You could continue to send the same XML data you're already writing, if you like.
I agree with the TCP/IP socket answer proposed by David. I would simply submit the data to a socket on the local PC and have the other application listen on that socket. You can transmit data easily and quickly using this method, and it will work no matter what version of the .NET framework you are targeting.
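A bare-bones sketch of that setup (the port number and the line-based framing are arbitrary choices of mine):

    using System;
    using System.IO;
    using System.Net;
    using System.Net.Sockets;

    class NetworkClientListener
    {
        static void Main()
        {
            // Listen on loopback so only processes on this PC can connect.
            var listener = new TcpListener(IPAddress.Loopback, 7777); // port is arbitrary
            listener.Start();

            using (var game = listener.AcceptTcpClient())      // the game connects here
            using (var reader = new StreamReader(game.GetStream()))
            {
                string xml;
                while ((xml = reader.ReadLine()) != null)      // one XML payload per line
                    Console.WriteLine("from game: " + xml);
            }
        }
    }

    // In the game process, sending is just:
    //   var client = new TcpClient("127.0.0.1", 7777);
    //   var writer = new StreamWriter(client.GetStream()) { AutoFlush = true };
    //   writer.WriteLine(playerXml);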
I suppose similar questions have already been asked, but I was unable to find any. Please feel free to point me to existing solutions.
I'll explain my scenario. I'd like to create a server application. There are many clients (currently only a few dozens, but it should scale up to 1000+) that connect to the server (which is running on a single machine).
Each client periodically sends a small amount of data to the server to process (processing is quick). The server can also send small amounts of data to each client on a regular basis. The response time should be low (<100 ms), but realtime or anything like that is not required.
My first idea goes back to when I was still programming in VB6: create a server socket to listen for incoming requests, then create a client socket for each possible client (single-threaded). I doubt this scales well. It also makes the communication difficult to implement.
So I figured I'd create a listener thread to accept new client connections and a different thread to actually read the incoming data by the clients. Since there are going to be many clients, I don't want to create a thread for each client. Instead, I'd prefer to use a single thread to read all incoming data in a loop, then either processing these data directly or creating work items for a different thread to process. I guess this approach would scale well enough. Any comments on this idea are most welcome.
The remaining problem I'm worried about is ease of communication. The above solution seems to require a manual protocol, possibly sending ASCII commands via TCP. While this would work, I think there should be a better way nowadays.
Some interface/proxy-ish way seems reasonable. I worked a bit with Java RMI before. From my understanding, .NET Remoting serves a similar purpose. Is Remoting a feasible solution to the scenario I described (many clients)? Is there an even better way I don't know of yet?
Edit:
This is not in LAN, but internet, if that matters.
If possible, it should also run under Linux.
As AresnMkrt pointed out, you should try WCF.
Just take it as is (with netTcpBinding, but don't forget to switch security off) and create a tracer bullet - measure whether performance meets your requirements.
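The "switch security off" step is a one-liner when you build the binding in code; a sketch:

    using System.ServiceModel;

    class TracerBullet
    {
        // NetTcpBinding defaults to Transport security; turning it off removes
        // the negotiation/encryption cost for a raw throughput measurement.
        public static NetTcpBinding CreateBinding()
        {
            return new NetTcpBinding(SecurityMode.None);
        }
    }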
If not, you can try to tune WCF - WCF is very extensible, and you can modify message serialization to send ASCII messages as you want.
Are you sure you need a binary protocol? Rather, are you sure you need to invent a whole new protocol when a plain RESTful service with JSON/XML would suffice? WCF can help you a lot in this regard.