How to distribute the server load - C#

I'm trying to get my head around this...
I have an application composed of one server (basically an N-tier console application with TCP async socket programming in C#), one MSSQL database, and several clients.
The problem is that thousands of clients connect to this server at the same time, and the server is not responding efficiently. I want to make this server distributed and scalable so the client load can be spread out.
I'm trying to figure out if there's a solution to this problem. Any convenient solution is highly appreciated.
Thanks in advance...

You are a bit slim on details, and this is not a drop-in solution, but I always steer clear of load balancers (central point of failure, and you can only efficiently use the one datacenter or region).
The client is the load balancer
Instead, have a central endpoint that lists the farmed servers, and then have the client randomly select a server to use for an amount of time. That selected server can then specialise in serving resources for that client, copying data from remote server(s) locally if it isn't there already.
This selected server would be the master for that client: data would be created there and replicated to the other servers later if the client changes server.
With such a distributed setup your servers can be deployed anywhere in the world, generically. Your clients also get better latency, and you can find the best-priced hosting without being tied to a load balancer.
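As a rough illustration (not part of the original answer), the client-side selection could look like this; the central endpoint URL, its one-host-per-line response format, and the re-pick interval are all assumptions:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Hedged sketch: the client fetches the farm list from a central endpoint
// and sticks to one randomly chosen server for a while.
class ServerPicker
{
    static readonly HttpClient Http = new HttpClient();
    static readonly Random Rng = new Random();

    string _current;
    DateTime _chosenAt;

    public async Task<string> GetServerAsync()
    {
        // Re-pick after an hour; the interval is an arbitrary assumption.
        if (_current == null || DateTime.UtcNow - _chosenAt > TimeSpan.FromHours(1))
        {
            // "https://central.example.com/servers" is a hypothetical endpoint
            // returning one host per line.
            var body = await Http.GetStringAsync("https://central.example.com/servers");
            var servers = body.Split(new[] { '\n' }, StringSplitOptions.RemoveEmptyEntries);
            _current = servers[Rng.Next(servers.Length)].Trim();
            _chosenAt = DateTime.UtcNow;
        }
        return _current;
    }
}
```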
Container Clustering
You'll find container clustering like CoreOS interesting, but it is much more complicated to set up. Don't rule it out, though. For such a simple console application, it's not that hard to tweak your solution and control your own, simpler scalable infrastructure with no extra layers of obscurity.

I would make the server application clusterable, so that you can start multiple instances on different virtual or physical computers. Then I would choose one single computer to be a load balancer. A good tool for load balancing in your case is HAProxy. This proxy will balance the work across the different servers, which will bring you the optimum of performance.
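As an illustration (not from the original answer), a minimal TCP-mode HAProxy configuration fronting two server instances might look like this; the addresses, port, and balancing algorithm are assumptions:

```
frontend tcp_in
    bind *:9000
    mode tcp
    default_backend app_servers

backend app_servers
    mode tcp
    balance leastconn
    server app1 10.0.0.11:9000 check
    server app2 10.0.0.12:9000 check
```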

Related

Better way to create a game server with a web interface

I decided to write a card game in C# that has a WinForms application as the main server to manage the game, with a web interface. I chose SignalR self-host for the main server, because I want to sell this app to others and don't want to modify the code or the HTML of the web interface. So my question is: is that good for handling 10,000 client requests? Is there a way to write this app for better performance?
Another thing is that I want the main server to handle login, cashout, profile and ... mostly set up by the customers, like Poker Mavens, and I just create a JSON API to perform these functions. Please guide me on which way is better to write this app!
With your server code self-hosted and a JavaScript client calling into your server methods, acting as your browser-based client, your design should work.
I am looking at this: https://learn.microsoft.com/en-us/aspnet/signalr/overview/deployment/tutorial-signalr-self-host
But I think you'll need to figure out scale-out scenarios and server failure scenarios with the self-host. In case there is a patch update on the server and it has to restart, you'll need to be able to fail over to a backup. Also consider the case when you need to upgrade the server. So you'll need to be able to host it on multiple servers, and you'll need to use the SignalR backplane option.
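For reference, a minimal self-host along the lines of that tutorial looks roughly like this; the URL, hub name, and method names are assumptions, and scale-out would additionally need a backplane registered in Startup:

```csharp
using System;
using Microsoft.AspNet.SignalR;
using Microsoft.Owin.Hosting;
using Owin;

// Hedged sketch of a SignalR self-host (Microsoft.AspNet.SignalR.SelfHost
// and Microsoft.Owin.Hosting packages assumed).
class Program
{
    static void Main()
    {
        using (WebApp.Start<Startup>("http://localhost:8080"))
        {
            Console.WriteLine("SignalR server running at http://localhost:8080");
            Console.ReadLine();
        }
    }
}

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // For scale-out across several self-hosted instances you would also
        // register a backplane here, e.g.:
        // GlobalHost.DependencyResolver.UseSqlServer(connectionString);
        app.MapSignalR();
    }
}

public class GameHub : Hub
{
    public void Send(string message)
    {
        // Broadcast to every connected client.
        Clients.All.broadcastMessage(message);
    }
}
```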
From a performance point of view, I have tested a Web API + SignalR application on a single 4-core, 14 GB server and was able to scale up to 20k connections, with the server comfortably serving more than 200 requests per second.
With a backplane these numbers were around 100-150 rps.
The response times in both cases were very good, at around 500 ms.
Please note, though, that your numbers could be VASTLY different based on your actual functionality.

How to effectively communicate between database-bound applications?

We have a number of different old school client-server C# WinForm client-side apps that are essentially front-ends for the database. Then there is a C# server-side windows service that waits on the client apps to submit orders and then it processes them.
The way the server-side service finds out whether there is work to do is that it polls the database. Over the years the logic of polling for waiting orders has gotten a lot more complicated due to the myriad of business rules. Because of this, the polling stored proc itself uses quite a bit of SQL Server resources even if there is nothing to do. Add to this the requirement that the orders be processed the moment they are submitted and you've got yourself a performance problem, as the database is being polled constantly.
The setup actually works fine right now, but the load is about to go through the roof and it is obvious that it won't hold up.
What are some effective ways to communicate between a bunch of different client-side apps and a server-side windows service, that will be more future-proof than the current method?
The database server is SQL Server 2005. I can probably get the powers that be to pony up for latest SQL Server if it really comes to that, but I'd rather not fight that battle.
There are numerous ways you can notify the clients.
You can use a ready-made solution like NServiceBus, to publish information from the server to the clients or other servers. NServiceBus uses MSMQ to publish one message to multiple subscribers in a very easy and durable way.
You can use MSMQ or another queuing product to publish messages from the server that will be delivered to the clients.
You can host a WCF service on the Windows service and connect to it from each client using a Duplex channel. Each time there is a change the service will notify the appropriate clients or even all of them. This is more complex to code but also much more flexible. You could probably send enough information back to the clients that they wouldn't need to poll the database at all.
You can have the service broadcast a UDP packet to all clients to notify them there are changes they need to pull. You can probably add enough information to the packet to allow the clients to decide whether they need to pull data from the server or not. This is very lightweight for the server and the network, but it assumes that all clients are on the same LAN.
Perhaps you can leverage SqlDependency to receive notifications only when the data actually changes (see the sketch after this list).
You can use any messaging middleware like MSMQ, JMS or TIBCO to communicate between your client and the service.
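A minimal SqlDependency sketch, assuming a hypothetical dbo.PendingOrders table and that Service Broker is enabled on the database (SqlDependency queries must follow the notification rules: two-part table names, an explicit column list, and so on):

```csharp
using System;
using System.Data.SqlClient;

class OrderWatcher
{
    // Hypothetical connection string.
    const string ConnStr = "Data Source=.;Initial Catalog=Orders;Integrated Security=true";

    static void Main()
    {
        SqlDependency.Start(ConnStr);   // requires Service Broker enabled on the database
        Subscribe();
        Console.WriteLine("Waiting for changes; press Enter to quit.");
        Console.ReadLine();
        SqlDependency.Stop(ConnStr);
    }

    static void Subscribe()
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT OrderId, Status FROM dbo.PendingOrders WHERE Status = 0", conn))
        {
            var dep = new SqlDependency(cmd);
            dep.OnChange += (s, e) =>
            {
                Console.WriteLine("Change detected: " + e.Info);
                Subscribe();            // a subscription fires only once; re-register each time
                // ...kick off order processing here instead of polling...
            };
            conn.Open();
            using (var reader = cmd.ExecuteReader()) { } // executing the command registers the subscription
        }
    }
}
```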
By far the easiest, and most likely the cheapest, answer is to simply buy a bigger server.
Barring that, you are in for a development effort that has a high probability of early failure. By failure I don't mean that you end up scrapping whatever it is you build. Rather, I mean you launch the changes and orders will be screwed up while you are debugging your myriad of business rules.
Quite frankly, I wouldn't consider approaching a communications change under pressure, presuming your statement about load going "through the roof" in the near term is accurate.
If your risk exposure is such that it has to be 100% functional on day one (which is normal when you are expecting a large increase in orders), with no hiccups, then just upsize the DB server. Heck, I wouldn't even install the latest SQL Server on it. Instead, just buy a larger machine, install the exact same OS and DB server (and patch levels), and move your database.
Then look at your architecture to determine what needs to go away and what can be salvaged.
If everybody connects to SQL Server then there is also the option of Service Broker. Unlike the other messaging/queueing solutions recommended so far, it is entirely contained in your database (no separate product to deploy, administer and configure), it offers a single story for your backup/recovery and high-availability needs (no separate backup for the message store, no separate DR/HA; whatever your DB solution is, it is also your messaging solution) and it offers a uniform programming API (SQL).
Even when everything is within one single SQL Server instance (i.e. there is no need to communicate over the network between multiple SQL Server instances), Service Broker still has an ace that no one can match: activation. With activation you eliminate the need to poll completely, because the system itself will launch your processing code (will 'activate' it) when there are events to process. The processing code can be internal (a T-SQL procedure or a SQLCLR .NET procedure) or external (see the external activator).
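Activation itself runs inside SQL Server, but even from C# a worker can block on a queue instead of polling it. A hedged sketch, assuming a hypothetical dbo.OrderQueue (the queue/service/message-type setup on the SQL side is omitted, and production code would receive inside a transaction and end conversations):

```csharp
using System;
using System.Data.SqlClient;

class QueueWorker
{
    // Hypothetical connection string and queue name.
    const string ConnStr = "Data Source=.;Initial Catalog=Orders;Integrated Security=true";

    static void Main()
    {
        using (var conn = new SqlConnection(ConnStr))
        {
            conn.Open();
            while (true)
            {
                // WAITFOR (RECEIVE ...) blocks server-side until a message arrives
                // (or the timeout elapses), so the client never busy-polls.
                using (var cmd = new SqlCommand(
                    @"WAITFOR (
                          RECEIVE TOP (1) CAST(message_body AS NVARCHAR(MAX)) AS body
                          FROM dbo.OrderQueue
                      ), TIMEOUT 5000;", conn))
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        if (!reader.IsDBNull(0))            // system messages can have no body
                            Console.WriteLine("Got message: " + reader.GetString(0));
                        // ...process the order here...
                    }
                }
            }
        }
    }
}
```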

Server Push vs Client Pull for Agent-Server Topology

I need to create a system comprising of 2 components:
A single server that processes and stores data. It also periodically sends out updates to the agents
Multiple agents that are installed at remote endpoints. These collect data in (often, but not always) long-running operations, and this data needs to get to the server
I'm using C# .NET, and ideally I want to use a standards-compliant communications method (i.e. one that could theoretically work with Java too, as we may well also use Java agents in the future). Are there any alternatives to web services? What are my options?
The way I see it I have 3 options using web services, and have made the following observations:
Client pull
No open port required at the agent, as it acts like a client
Would need to poll the server for updates
Server push
Open port at the agent, as it acts like a server
Server must poll agents for results
Hybrid
Open port at the agent, as it acts like both a client and a server
No polling; server pushes out updates when required, client sends results when they are available
The 'hybrid' (where agents are both client and server) seems the obvious choice - but this application will typically be installed in enterprise and government environments, and I'm concerned they may have an issue with opening a port at the agent. Am I dwelling too much on this?
Are there any other pros and cons I've missed out?
Our friends at http://www.infrastructures.org swear by pull-based mechanisms: http://www.infrastructures.org/papers/bootstrap/bootstrap.html
A major reason why they prefer client pull over server push is that clients may be down, and clients must (in general) apply all the operations pushed by servers. If this criterion isn't important in your case, perhaps their conclusion won't be your conclusion, but I do think it is worth reading the "Push vs Pull" section of their paper to decide for yourself.
I would say that in this day and age you can seriously consider only pull technologies. The problem with push is that clients are often hidden behind Network Address Translation (NAT) devices like wireless routers, broadband modems or company firewalls, and they are, more often than not, unreachable from the server.
Making outbound connections ('phone home'), especially on well-known ports like HTTP/HTTPS, can basically be assumed to be possible even on the most constricted networks.
If you use some kind of messaging server (JMS for Java; not sure for C#) then your messaging server is the only server that needs to open a port, and you can have two-way communication from your agent to the messaging server and from the server to the messaging server. This would allow you to accomplish the hybrid model without needing to open a port on the agent.
IMHO, your best option is the pull option; it can satisfy your main system requirements as follows:
The first part (data needs to get to the server) can obviously be done by invoking a web method that sends that data as a parameter.
The second part (the server periodically sends out updates to the agents) can still be done through regular client pulls: some sort of web service method that "asks" for the updates since the agent's last pull (using a timestamp of some kind to get the updates it missed); see the sketch below.
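As a rough sketch of that pull loop (the URLs, the payload, and the poll interval are all assumptions, not part of the original answer):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class AgentClient
{
    static readonly HttpClient Http = new HttpClient();
    static DateTime _lastPull = DateTime.MinValue;

    static async Task Main()
    {
        while (true)
        {
            // Part 1: push collected data up to the server (hypothetical endpoint).
            await Http.PostAsync("https://server.example.com/api/results",
                new StringContent("{\"reading\": 42}"));

            // Part 2: ask for updates newer than the last pull (hypothetical endpoint).
            var updates = await Http.GetStringAsync(
                "https://server.example.com/api/updates?since=" +
                Uri.EscapeDataString(_lastPull.ToString("o")));
            _lastPull = DateTime.UtcNow;
            Console.WriteLine("Updates: " + updates);

            await Task.Delay(TimeSpan.FromMinutes(1));   // poll interval is arbitrary
        }
    }
}
```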
The hybrid method seems a bit weird to me, given that I think of an agent as a part of the system that might well go "offline" quite often. What would the server do if a push failed? That's usually a tough question/decision, especially if you're not sure whether it's an intended "going offline" or a system/network failure, etc.

Communication between two separate applications

I have developed a Windows service which reads data from a database; the database is populated via an ASP.NET MVC application.
I have a requirement to make the service reload the data in memory by issuing a select query to the database. This reload will be triggered by the web app. I have thought of a few ways to accomplish this, e.g. Remoting, MSMQ, or simply making the service listen on a socket for the reload command.
I am just looking for suggestions as to what would be the best approach to this.
How reliable does the notification have to be? If a notification is lost (let's say the communication pipe has a hiccup in a router and drops the socket), will the world end, or is it business as usual? If the service is down, do notifications from the web site need to be queued up for when it starts up, or can they be safely dropped?
The more reliable you need it to be, the more you have to go toward a queued solution (MSMQ). If reliability is not an issue, then you can choose from the myriad of non-queued solutions (Remoting, TCP, UDP broadcast, HTTP call etc.).
Do you care at all about security? Do you fear an attacker may ping your 'refresh' to death, causing at least a DoS if not worse? Do you want to authenticate the web site making the 'refresh' call? Do you need privacy for the notifications (i.e. encryption)? UDP is more difficult to secure (no session).
Does the solution have to allow for easy deployment, configuration and management in the field (i.e. is it a standalone, packaged product), or is it a one-time deployment that can be fixed 'just in time' if something changes?
Without knowing the details of all these factors, it is difficult to say 'use X'. At least one thing is sure: Remoting is sort of obsolete by now.
My recommendation would be to use WCF, because of the ease of changing bindings on the fly; you can test various configurations (TCP, named pipes, HTTP) without any code change.
BTW, have you considered using Query Notifications to detect data changes, instead of active notifications from the web site? I reckon this is a shot in the dark, but equivalent active cache support exists in many databases.
Simply host a WCF service inside the Windows service. You can use netTcpBinding for the binding, which will use binary over TCP/IP. This will be much simpler than raw sockets and easier to develop and maintain.
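A minimal sketch of that setup; the contract, address, and port are assumptions:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IReloadService
{
    [OperationContract]
    void Reload();
}

public class ReloadService : IReloadService
{
    public void Reload()
    {
        Console.WriteLine("Reloading cached data...");
        // re-run the SELECT and refresh the in-memory cache here
    }
}

class Program
{
    static void Main()
    {
        // Hypothetical net.tcp address; host this inside the Windows service's OnStart.
        var host = new ServiceHost(typeof(ReloadService),
            new Uri("net.tcp://localhost:9000/reload"));
        host.AddServiceEndpoint(typeof(IReloadService),
            new NetTcpBinding(), string.Empty);
        host.Open();
        Console.WriteLine("Service listening; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```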
I'd use standard TCP sockets - this will survive all sorts of moving of components and minimizes configuration issues, IMHO.
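As a rough sketch of that approach (the port number and the "RELOAD" command are assumptions):

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

class ReloadListener
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 9001);   // hypothetical port
        listener.Start();
        while (true)
        {
            // Accept one connection at a time and read a single text command.
            using (var client = listener.AcceptTcpClient())
            using (var reader = new StreamReader(client.GetStream()))
            {
                if (reader.ReadLine() == "RELOAD")
                {
                    Console.WriteLine("Reload requested by " + client.Client.RemoteEndPoint);
                    // re-run the SELECT and refresh the in-memory data here
                }
            }
        }
    }
}
```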

High availability

Is there any way to configure a WCF service with a failover endpoint if the primary endpoint dies? Kind of like being able to specify a failover server in a SQL cluster.
Specifically, I am using the TCP/IP binding for speed, but on the rare occasion that the machine is not available I would like to redirect traffic to the failover server. I'm not too bothered about losing messages; I'd just prefer not to write the code to handle re-routing.
You need to use a layer-4 load balancer in front of the two endpoints. Probably best to stick with a dedicated piece of hardware.
Without trying to sound too vague, I think Windows Network Load Balancing (NLB) should handle this for you.
I haven't done it yet with WCF, but we plan to have a local DNS entry pointing to our NLB virtual IP address, which will direct all traffic to one of our servers hosting services within IIS. I have used NLB for this exact scenario in the past for web sites and see no reason why it will not work well with WCF.
The beauty of it is that you can take servers in and out of the virtual cluster at will, and NLB takes care of all the ugly redirecting to an available node. It also comes with a great price tag: $FREE with your Windows Server license.
We've had good luck with BigIP as a solution, though it's not cheap or easy to set up.
One nice feature is that it allows you to set up your SSL certificate (and the backdoor to the CA) at the load balancer's common endpoint. You can then use protocols to transfer the requests back to the WCF servers so that the entire transmission is encrypted.
