How to Process Multiple Requests at the Same Time With a Server - C#

Alright, so here's my thinking:
If two requests were made to one server at the same time, would one be denied, and if so, how could you keep something like that from happening?
I currently have a server set up for a chat application I'm making, and it basically starts a TCP/IP connection, waits for a client, reads the data sent from them, sends something back, disconnects, and repeats. That way, the server never stops running, and as many requests as you want could be made.
However, what if a client was starting up while another program was already using the server? If one program was getting a file from the server while the other one was starting up, and the one starting up needed data from the server but the server was already busy, what would happen?
Would the startup wait until the server was available, or would it just go straight to an error (since no connection was available)? If so, this could be really bad, since then the user wouldn't have all the data, like the list of his friends, or a few chats. How could you fix this?
My idea would be to have a while loop set up so that the client keeps querying the server until it gets a response. Is this the right way to go about this?

No, the client should assume that the server is always available. You'll see that the underlying Socket.Listen(Int32) method (which TcpListener acts as a wrapper for) takes one parameter, backlog:
Listen causes a connection-oriented Socket to listen for incoming connection attempts. The backlog parameter specifies the number of incoming connections that can be queued for acceptance. To determine the maximum number of connections you can specify, retrieve the MaxConnections value. Listen does not block.
Most server implementations start with something like the following:
socket.Listen(10); // You choose the number
while (true)
{
    var client = socket.Accept();
    // Spawn a thread (or Task if you prefer), and have that do all the work for the client
    var thread = new Thread(() => DoStuff(client));
    thread.Start();
}
With this implementation, there is always a thread listening for new connections. When a client connects, a new thread is created for it, so the processing for that client doesn't delay/prevent more connections from being accepted on the main thread.
Now, if a few new connections come in faster than the server can create new threads, then the new connections will be put in the backlog automatically (I think at the OS level) - the backlog parameter determines how many pending connections can be in the backlog at once.
What if the backlog fills up completely? At this point, you need a second server, and a load balancer to act as a middle-man, directing half the requests to the first server, and half the requests to the second server.
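For illustration only (not part of the original answer), the DoStuff handler referenced in the snippet above might look like the following minimal sketch, assuming a simple one-shot request/response exchange over the accepted Socket:
using System.Net.Sockets;
using System.Text;

static void DoStuff(Socket client)
{
    try
    {
        // Read whatever the client sent (a real protocol would frame messages properly)
        var buffer = new byte[4096];
        int received = client.Receive(buffer);
        string request = Encoding.UTF8.GetString(buffer, 0, received);

        // Send something back on the same connection
        byte[] reply = Encoding.UTF8.GetBytes("Echo: " + request);
        client.Send(reply);
    }
    finally
    {
        // Clean up this client's connection; the accept loop keeps running on the main thread
        client.Shutdown(SocketShutdown.Both);
        client.Close();
    }
}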

The logic here is simple.
The server starts listening and begins accepting clients.
A client tries to connect to the server. You can simply do nothing if the server isn't running, or you can implement some reconnect logic that notifies the client about unsuccessful connection attempts.
Let's assume the server was running when the client tried to connect. You are talking about a multi-client application, so you shouldn't disconnect the client after one exchange. You should add the connected client to a list of connected clients.
Then, when one of the connected clients sends a message, you receive that message via its client socket and broadcast it to all the other connected clients except the one that sent the data.
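A minimal sketch of that idea (not from the original answer), keeping accepted clients in a shared list and relaying a message to everyone except the sender; the class and member names are just placeholders:
using System.Collections.Generic;
using System.Net.Sockets;

class ChatServer
{
    // Shared list of connected clients; lock it because several threads touch it
    private readonly List<TcpClient> clients = new List<TcpClient>();

    public void AddClient(TcpClient client)
    {
        lock (clients) clients.Add(client);
    }

    public void Broadcast(byte[] data, TcpClient sender)
    {
        lock (clients)
        {
            foreach (var client in clients)
            {
                if (client == sender) continue; // skip the client that sent the message
                client.GetStream().Write(data, 0, data.Length);
            }
        }
    }
}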

Related

How to manage large number of TCP connections using ASP.NET and C#

I have an application which connects to a third-party server, let's call it Server-A. I have been given four different ports, i.e.
4000, 40001, 40002, 40003. On each port I can create 20 connections, so I can create 80 total connections with Server-A. I want to create a service layer that should communicate with Server-A on the mentioned ports. The technology will be ASP.NET C#.
The problem statement
1- The application should be non-blocking/asynchronous to handle 10 to 20 million requests per day.
2- Whenever the service layer starts, it creates 20 connections on each port (80 connections total).
3- All connections should remain connected/alive 24/7 and reconnect whenever any connection drops/disconnects. It will send a heartbeat message during idle time.
My Questions
How can I manage these connections? Should I add them to a static list one by one as each TCP socket connects successfully?
How can I know that a certain connection has dropped/disconnected?
How can I send certain requests on different ports? Let's say if a>b, send it on port 4000; else if a<=b, send it on 4001.
How can I make it asynchronous?
For an initial start, I created a single TCP connection on a single port and it works as expected. Then I replicated the same code for another port, but I know this is a very bad approach, and I would have to copy the same code 80 times to make 80 connections. I want a clean and scalable way to achieve this, so that in the future I may increase the connections to 100 or more.
Is there any framework which I can use?
Any help would be greatly appreciated.
@Kartoos Khan, I have built some services with those requirements, and using asynchronous methods is the best way to create high-performance services in C#, because:
They do not block on I/O devices such as sockets.
They minimize the number of threads and improve performance.
Let me recommend the book Writing High-Performance .NET Code. Chapter 4, Asynchronous Programming, has the information you need to improve performance.
From my experience, these are my recommendations:
Create a main thread to handle the main program.
Create a class to handle the server socket, which implements an asynchronous accept loop using the BeginAccept and EndAccept methods; a sketch of this pattern is shown below.
Create another class to handle each socket connection, which has the Socket object as a property.
2.1 Create a method to start the reading process, which will be called by the server class to start communication between the endpoints. This method starts the read process asynchronously.
2.2 To read and write asynchronously, get the NetworkStream from the socket and use the BeginRead and EndRead methods to receive data, and BeginWrite and EndWrite to send data. The documentation covers these methods.
If your service only needs to connect to a server, skip step 1 and implement the client class to start the connection to a specific EndPoint.
Use a collection class, such as a Dictionary (a key-value-pair collection), to store each client class, and use the socket ID as the key to access each one.
Because each client class handles its own socket, I usually implement the reconnect logic inside the client class itself; this way each client is responsible for itself.
The main program will be responsible for creating each client, setting the EndPoint of each one as you need, and starting to connect each of them. In this case, TcpClient allows you to begin an asynchronous connect using the BeginConnect and EndConnect methods.
Here you can see more details about this issue.
I hope this might be useful for you.
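As a rough illustration of the BeginAccept/EndAccept pattern recommended in steps 1 and 2 (a hedged sketch only; the class name, port handling and backlog size are placeholders, and real code would add error handling and hand each accepted socket to a client class):
using System;
using System.Net;
using System.Net.Sockets;

class SocketServer
{
    private readonly Socket listener =
        new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

    public void Start(int port)
    {
        listener.Bind(new IPEndPoint(IPAddress.Any, port));
        listener.Listen(100);
        // Kick off the first asynchronous accept; the callback re-arms itself
        listener.BeginAccept(OnAccept, null);
    }

    private void OnAccept(IAsyncResult ar)
    {
        Socket client = listener.EndAccept(ar); // the newly connected client socket
        listener.BeginAccept(OnAccept, null);   // keep accepting further clients

        // Hand 'client' off to a per-connection class that starts BeginRead/EndRead here
    }
}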
To handle such a large volume of traffic you need to do a few things.
Assumptions
You are connecting to another client’s server.
You have a large volume of web traffic from either multiple machines or from multiple working processes on any given machine.
You know how to create TCP client server objects and handle the connections.
For fewer than 80 worker threads across your servers:
Because each thread processes synchronously, you only need to use a single connection for each thread.
If no single web server is running more than 20 worker processes, then you can designate a single port for each server to use. Stick the port in your web.config file as a variable and use that when creating connections. You will never hit the limit.
Store your connection in a shared object that the entire app can use (could put this in your BLL layer) and if you have a connection error, re-create a new connection on that thread.
For more than 80 worker threads across your servers:
Do the same as the last step, but at this point you either need to negotiate for more connections or add a new layer in between your application and the server you wish to reach.
This second layer acts as a broker for the two sides and can manage a pool of connections instead and draws off a connection each time you need to access Server-A and puts it back into the pool when finished.
Anytime you connect to the broker application, spawn a new thread to do the processing until the connection is dropped or closed.
Keep track of your open connections and voila, you can have as many clients as you need, but your bottleneck will be those 80 outbound connections even if you have hundreds or thousands coming in.
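A hedged sketch of that broker/pool idea: a bounded pool that lends out one of the pre-built connections and takes it back when the caller is finished (the class and member names are invented for illustration):
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Net.Sockets;
using System.Threading;

class ConnectionPool
{
    private readonly ConcurrentBag<TcpClient> pool = new ConcurrentBag<TcpClient>();
    private readonly SemaphoreSlim available;

    public ConnectionPool(IEnumerable<TcpClient> connections)
    {
        foreach (var connection in connections)
            pool.Add(connection);
        available = new SemaphoreSlim(pool.Count);
    }

    public TcpClient Rent()
    {
        available.Wait();              // block until one of the 80 connections is free
        pool.TryTake(out TcpClient client);
        return client;
    }

    public void Return(TcpClient client)
    {
        pool.Add(client);
        available.Release();           // wake up a waiting caller
    }
}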

c# sockets check for incoming connections and accept only one connection at a time

Let's assume that I have a basic C# socket server.
TcpListener listener = new TcpListener(8888);
listener.Start();
Socket clientSocket = listener.AcceptSocket();
This would accept the first connection that's coming from a client.
However, say we have 10 incoming connections. I do not wish to accept all 10 connections, I want to have a list of all the pending connections and choose one connection that I will accept and exchange data with - we can assume that I can identify them by their IP addresses. After the exchange is done, I will close the socket and pick a different connection to exchange data with.
I don't need to have more than 1 active connection since I'll be serving one client at a time.
Is such a feat possible? Or do I have to accept all of them and thread my way through them?
EDIT: Client timeout is not a problem. The clients are programmed to periodically retry their connection in case of a timeout.
EDIT2:
I have managed to find a similar question after quite a bit of searching. It seems like you cannot differentiate the clients until you accept their connection. Here's the question:
How to reject a connection attempt in c#?
I thought I could at least tell which IP address the connection attempts are coming from, but it seems that's not possible until I connect to the clients. Looks like I'll be threading my way through the clients after all.
You do not have to accept all the connections, only one at a time. You can just call AcceptSocket, handle that client, and then disconnect and accept another one, e.g. in a while (listener.Pending()) loop.
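A minimal sketch of that one-client-at-a-time loop, reusing the TcpListener from the question (purely illustrative):
using System.Net;
using System.Net.Sockets;

TcpListener listener = new TcpListener(IPAddress.Any, 8888);
listener.Start();

while (true)
{
    // Accept exactly one pending connection and serve it to completion
    using (Socket clientSocket = listener.AcceptSocket())
    {
        // ... exchange data with this single client ...
        clientSocket.Shutdown(SocketShutdown.Both);
    }
    // Then loop around and accept the next pending client (if any are queued in the backlog)
}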

c# and networking - Client listening and server send and most efficient C# socket networking?

I'm working on a game that depends on the standard System.Net.Sockets library for networking. What's the most efficient and standardized "system" I should use? Should the client send data requests every set amount of seconds, when a certain event happens? My other question, is a port forward required for a client to listen and receive data? How is this done, is there another socket created specifically for listening only on the client? How can I send messages and listen on the same socket on the client? I'm having a difficult time grasping the concept of networking, I started messing with it two days ago.
Should the client send data requests every set amount of seconds, when a certain event happens?
No. Send your message as soon as you can. The socket stack has algorithms that determine when data is actually sent, for instance the Nagle algorithm.
However, if you send a LOT of messages, it can be beneficial to enqueue everything in the same socket method call. That said, you would need to send several thousand messages per client per second for that to give you any benefit.
My other question, is a port forward required for a client to listen and receive data?
No. Once a socket connection has been established, it's bidirectional, i.e. both endpoints can send and receive information without screwing something up for the other endpoint.
But to achieve that you typically have to use asynchronous operations so that you can keep receiving all the time.
How is this done, is there another socket created specifically for listening only on the client?
The server has a dedicated socket (a listener) whose only purpose is to accept client sockets. When the listener has accepted a new connection from a remote endpoint, you get a new socket object which represents the connection to the newly connected endpoint.
How can I send messages and listen on the same socket on the client?
The easiest way is to use asynchronous receives and blocking sends.
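As a rough, hedged sketch of "asynchronous receives and blocking sends" on an already-connected TcpClient (the endpoint and all names are placeholders; a real client would add message framing and error handling):
using System.Net.Sockets;
using System.Text;

TcpClient client = new TcpClient("example-host", 12345); // placeholder endpoint
NetworkStream stream = client.GetStream();
var buffer = new byte[4096];

// Asynchronous receive: the callback fires whenever data arrives
void BeginReceive()
{
    stream.BeginRead(buffer, 0, buffer.Length, ar =>
    {
        int read = stream.EndRead(ar);
        if (read > 0)
        {
            string message = Encoding.UTF8.GetString(buffer, 0, read);
            // ... handle the incoming message ...
            BeginReceive();   // keep listening for the next message
        }
    }, null);
}
BeginReceive();

// Blocking send: just write whenever there is something to say
byte[] outgoing = Encoding.UTF8.GetBytes("hello");
stream.Write(outgoing, 0, outgoing.Length);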
If you do not want to take care of everything by yourself, you can try my Apache licensed library http://sharpmessaging.net.
Creating a stable, high quality server will require you to have a wealth of knowledge on networking and managing your objects.
I highly recommend you start with something smaller before attempting to create your own server from scratch, or at the very least play around with a server for a different game that's already made, attempt to improve upon it or add new features.
That being said, there are a few ways you can set up the server. If you plan on having more than a couple of clients, you generally don't want them all to send data whenever they feel like it, as this can bog down the server; you want to structure it so that the client sends as little data as possible on a scheduled basis, and the server can request more when it's ready. How that's set up and structured is up to you.
A server generally has to have a port forwarded on the router in order for requests to make it to the server from the internet, and here is why. When your computer makes a connection to a website (Stack Overflow, for example), it sends out a request on a random port; the router remembers the port you sent out on and remembers who sent it (you), so when the website sends back the information you requested, the router knows you wanted that data and sends it back to you. In the case of RUNNING a server, there is no outbound request to a client (Jack, for example), so the router doesn't know where Jack's request is supposed to go. By adding a port forwarding rule in the router, you're saying that all traffic arriving on port 25565 (for example) is supposed to go to your server.
Clients generally do not need to forward ports because they are only making outbound requests and receiving data.
Server starts and begins listening on port 25565
Client starts and connects to the server on port 25565, initiating a connection
Server responds to client on whatever port the client used to connect (this is done behind the scenes in sockets)
Communication continues from here.
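A minimal sketch of that sequence with TcpListener/TcpClient on the example port 25565 (the server address is a placeholder, and the two methods would run on different machines):
using System.Net;
using System.Net.Sockets;

static void RunServer()
{
    // Server machine: listen on port 25565 (the port forwarded on the router)
    var listener = new TcpListener(IPAddress.Any, 25565);
    listener.Start();
    TcpClient accepted = listener.AcceptTcpClient(); // blocks until a client connects
    // ... exchange data over accepted.GetStream() ...
}

static void RunClient(string serverAddress)
{
    // Client machine: connect out to the server; no port forwarding needed on this side
    var client = new TcpClient();
    client.Connect(serverAddress, 25565);
    // The reply comes back over this same connection on the client's ephemeral port
}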

Thread Management in C# Socket Programming

I have a TCP socket application in which the clients send huge string messages to the server at the same time, and the server takes these messages and writes them into an Access DB. If there are many clients, the server side cannot handle each client properly, and sometimes the server shuts itself down.
Is there any way to tell a client's thread, before it sends its message, to wait in a queue if another client is currently being served? That way the server wouldn't need to handle, for example, 30 clients' demands at the same time.
For example:
Client 1 sends a message => server processes client 1's demand.
Client 2 waits for client 1's demand to complete, then client 2 sends its message => server processes client 2's demand.
Client 3 waits for client 2's demand to complete, and so on.
My problem appears when I use the Access DB. Opening the Access connection, saving the data into tables, and closing the DB takes time, and the server goes haywire :) If I don't use the Access DB, I can receive huge messages with no problem.
Yes, you can do that; however, that's not the most efficient way of doing it. Your scheme is single-threaded.
What you want to do is create a thread pool, accept messages from multiple clients, and process them on separate threads.
If that's too complicated, you can have a producer-consumer queue within your server: all incoming messages are stored in a queue while your server processes them on a first-come, first-served basis.
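Not from the original answer, but a minimal sketch of that producer-consumer queue using BlockingCollection: socket handlers only enqueue (which is fast), and a single worker does the slow Access DB writes (all names, including SaveToAccessDb, are placeholders):
using System.Collections.Concurrent;
using System.Threading.Tasks;

class MessageWriter
{
    // Socket handlers are producers: they only enqueue, which is fast
    private readonly BlockingCollection<string> messageQueue = new BlockingCollection<string>();

    public void OnMessageReceived(string message)
    {
        messageQueue.Add(message);
    }

    // A single consumer drains the queue and does the slow database work, first come first served
    public void StartConsumer()
    {
        Task.Run(() =>
        {
            foreach (string message in messageQueue.GetConsumingEnumerable())
            {
                SaveToAccessDb(message);
            }
        });
    }

    private void SaveToAccessDb(string message)
    {
        // Placeholder: open the Access connection, insert the row, close the connection
    }
}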
I think you should consider using a web server for your application, and replacing your protocol with HTTP. Instead of sending huge strings on a TCP stream, just POST the string to the server using your favorite HttpClient class.
By moving to HTTP you more or less solve all your performance issues. The web server already knows how to handle multiple long requests so you don't need to worry about that. Since you're sending big strings, the HTTP overhead is not going to affect your performance.
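For example, a hedged sketch of posting a large string with HttpClient instead of writing it to a raw TCP stream (the URL is a placeholder):
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

static async Task SendMessageAsync(string hugeMessage)
{
    using (var http = new HttpClient())
    {
        var content = new StringContent(hugeMessage, Encoding.UTF8, "text/plain");
        // POST the string to the web server endpoint (placeholder URL)
        HttpResponseMessage response = await http.PostAsync("http://example.com/api/messages", content);
        response.EnsureSuccessStatusCode();
    }
}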

Determine if server is listening when using udp

The setting: I want to write a point-to-point Connection class that, when used, does not differentiate between server and client. The first host which calls connect() shall become the server waiting for the client to connect and the second shall become the client that connects to the server.
In order to do that the connect() method first needs to check for a listening server. a) The first time this happens no server is found and the party calling connect() starts listening on localhost and the configured port for an incoming connection. b) The second party calling connect() also checks the remote host on the given port, recognizes the server and connects to it.
This is not too hard using TCP, since TcpClient.Connect() throws an exception when no connection could be established; therefore I know when I'm the first. However, since I'm on a reliable LAN only, I wanted to use UDP instead.
My problem: how can I determine whether a UDP server socket is waiting for incoming data?
Ideally, I would like to use the asynchronous network API directly afterwards, instead of dealing with listening threads all by myself.
With UDP, the communication model is akin to a message in a bottle: you know you sent it, but there's no way to know if anyone ever received it.
You need to manually establish a communication protocol to determine whether the remote party is listening (e.g. have them send a "Yes, I'm here" response). This would require both endpoints to accept UDP datagrams.
As Jon and Andrew said, you can't see whether a listener is open, but you can implement a ping/pong protocol. Send a ping the first time you connect; if no pong comes back, then set yourself up as the server.
If you do get a pong back, then that's your server.
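A hedged sketch of that ping/pong check with UdpClient, assuming both parties agree on the port and the "ping"/"pong" strings (all names and the timeout are illustrative):
using System.Net;
using System.Net.Sockets;
using System.Text;

// Returns true if a remote party answered the ping, i.e. a "server" already exists
static bool PingForServer(string remoteHost, int port)
{
    using (var udp = new UdpClient())
    {
        udp.Client.ReceiveTimeout = 1000;       // wait at most one second for a pong
        byte[] ping = Encoding.UTF8.GetBytes("ping");
        udp.Send(ping, ping.Length, remoteHost, port);

        try
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            byte[] reply = udp.Receive(ref remote);  // blocks until data arrives or the timeout fires
            return Encoding.UTF8.GetString(reply) == "pong";
        }
        catch (SocketException)
        {
            return false;   // timed out: no server answered, so become the server
        }
    }
}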
I don't think you can check for a listening server, short of sending a packet and waiting to see if you get a reply.
