Finding server(s) capacity - C#

I am working on an application stack as follows:
There is a Cisco load balancer that can handle 40K concurrent users in front of 2 application servers. Behind them there is a SQL Server database with 2 TB of data.
What I want to find out is how much traffic this application can handle at peak times; in other words, what is the capacity of this stack?
I am aware of JMeter, but I don't think running JMeter from a single machine can determine the capacity of the application.
How would one determine the capacity of this stack?

I don't think running JMeter from a single machine can find out the
capacity of the application.
JMeter has a distributed mode allowing you to simulate huge loads by using multiple client machines:
In the event that your JMeter client machine is unable,
performance-wise, to simulate enough users to stress your server, an
option exists to control multiple, remote JMeter engines from a single
JMeter GUI client. By running JMeter remotely, you can replicate a
test across many low-end computers and thus simulate a larger load on
the server. One instance of the JMeter GUI client can control any
number of remote JMeter instances, and collect all the data from
them.
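In practice that means starting jmeter-server on each load-generator machine and pointing the controller at them. A typical non-GUI run might look like this (host names and file names below are placeholders):

# on every load-generator machine
jmeter-server

# on the controlling machine: non-GUI run, fanned out to the remote engines
jmeter -n -t capacity-test.jmx -R loadgen1,loadgen2 -l results.jtl

The remote engines can also be listed once under remote_hosts in jmeter.properties and addressed with -r instead of -R.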

Related

How to distribute the server load

I'm trying to get my head around this...
I have an application composed of one server (basically an N-tier console application using async TCP sockets in C#), one MSSQL database and several clients.
Now the problem is that thousands of clients connect to this server at the same time and the server is not responding efficiently. I want to make this server distributed and scalable to spread the client load.
I'm trying to figure out if there's a solution to this problem. Any convenient solution is highly appreciated.
Thanks in advance...
You are a bit slim on details, and this is not a drop-in solution, but I always steer clear of load balancers (they are a central point of failure, and you can only efficiently use one datacenter or region).
The client is the load balancer
Instead, have a central endpoint that lists the farmed servers, and then have the client randomly select a server to use for a period of time. That selected server can then specialise in serving resources for that client, copying data from the remote server(s) locally if it isn't there already.
This selected server would be the master for that client; data would be created there and replicated to other servers later if the client changes server.
With such a distributed setup, your servers can be deployed anywhere in the world. Your clients get better latency, and you can find the best-priced hosting without being tied to a load balancer.
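A minimal sketch of that client-side selection, assuming a hypothetical directory endpoint that returns one server address per line (all names and URLs here are illustrative, not part of the original answer):

using System;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch only: the client fetches the farm list from a central directory,
// picks one server at random and sticks to it for a while.
public static class ServerSelector
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task<string> PickServerAsync(Uri directoryUrl)
    {
        // Hypothetical directory endpoint returning one server address per line.
        var body = await Http.GetStringAsync(directoryUrl);
        var servers = body.Split(new[] { '\n', '\r' }, StringSplitOptions.RemoveEmptyEntries);

        // Random selection spreads clients across the farm without a balancer.
        return servers[new Random().Next(servers.Length)];
    }
}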
Container Clustering
You may find container clustering (for example CoreOS) interesting, but it is much more complicated to set up. Don't rule it out, though. For such a simple console application, it's not that hard to tweak your solution and control your own simpler, scalable infrastructure with no extra layers of obscurity.
I would make the server application clusterable, so that you can start multiple instances on different virtual or physical computers. Then I would choose one single computer to be a load balancer. A good tool for load balancing in your case is HAProxy. This proxy will spread the work across the different servers and give you the best performance.
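For illustration, a minimal TCP section of haproxy.cfg for two application servers could look roughly like this (addresses, ports and names are placeholders, and the usual global/defaults timeout settings are omitted):

frontend app_in
    mode tcp
    bind *:9000
    default_backend app_servers

backend app_servers
    mode tcp
    balance roundrobin
    server app1 10.0.0.11:9000 check
    server app2 10.0.0.12:9000 check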

How to solve limitations of SignalR in scaleout for backplane

I use ASP.NET MVC and C#. I found SignalR for transferring data in real time, but SignalR has some limits.
According to the documentation on this:
Using a backplane, the maximum message throughput is lower than it is when clients talk directly to a single server node. That's because the backplane forwards every message to every node, so the backplane can become a bottleneck. Whether this limitation is a problem depends on the application. For example, here are some typical SignalR scenarios:
Server broadcast (e.g., stock ticker): Backplanes work well for this
scenario, because the server controls the rate at which messages are
sent.
Client-to-client (e.g., chat): In this scenario, the backplane might
be a bottleneck if the number of messages scales with the number of
clients; that is, if the rate of messages grows proportionally as
more clients join.
High-frequency realtime (e.g., real-time games): A backplane is not
recommended for this scenario.
My project needs high-frequency realtime (e.g., real-time games).
I also need real-time video chat.
My scenario:
I have a master server and multiple slave servers. Clients connect to the slave servers, and the slave servers connect to the master server.
Example:
Server Slave-1 and server Slave-2 are connected to the master server; client-A and client-B are connected to Slave-1, and client-C and client-D are connected to Slave-2.
client-A sends a message or data to, or is in a live chat with, client-D.
How can I implement this scenario?
[Update-1]
If I don't use SignalR for this, what should I use?
[Update-2]
In my scenario, the master server acts like a router and the slave servers act like switches. Clients connect to a switch, and the switch connects to the router. If client-A sends a data packet to client-C, the data packet is sent to the router, and the router handles the data packet. There could be over 2,000 slave servers, and the number of users for each server is over 10,000.
Thanks.
A backplane will introduce delays in message delivery, which will not work well for low-latency work. If you absolutely must have multiple servers to handle your clients, and you absolutely must have minimal latency, then a backplane is probably not going to work for you.
However, check out this conversation on the ASP forums. The poster is seeing average latencies of around 25ms for 60,000 messages per second to 3,000 connected clients on one server.
As is often the case, the trade-off here is between latency and complexity. The optimal solution is for messages to be routed only to the server(s) containing the target client(s). To achieve this you need a way to track every client connection, deal with reconnects to different servers, etc. You can probably solve this with a few tens of hours of hard slog programming, but in doing so you're going to break most of what makes SignalR useful.
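For what it's worth, the core of that tracking is just a shared map from client to the node currently serving it; the hard part is keeping it consistent across disconnects and reconnects. A bare-bones in-memory sketch (all names are mine, not SignalR's):

using System.Collections.Concurrent;

// Sketch of a registry mapping a client id to the node it is connected to.
// In a real deployment this would live in a shared store (e.g. Redis) so that
// every node sees the same view, and entries would be removed on disconnect.
public class ClientRegistry
{
    private readonly ConcurrentDictionary<string, string> _clientToNode =
        new ConcurrentDictionary<string, string>();

    public void Register(string clientId, string nodeName) =>
        _clientToNode[clientId] = nodeName;

    public void Unregister(string clientId) =>
        _clientToNode.TryRemove(clientId, out _);

    // Returns null when the client is unknown (offline or not yet connected).
    public string FindNode(string clientId) =>
        _clientToNode.TryGetValue(clientId, out var node) ? node : null;
}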
For alternatives, the first that comes to mind is ZeroMQ. A bit more work, especially if your clients are browser based, but low latency and high throughput are project goals for ZeroMQ. You'll need to handle scale-out yourself though... and you're back to tracking connection points across multiple servers and reconnects.
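If you go the ZeroMQ route from C#, the NetMQ port covers the basics; a minimal publish/subscribe sketch (addresses and topic names are placeholders, and each side would normally run in its own process) looks roughly like this:

using NetMQ;
using NetMQ.Sockets;

// Publisher side (normally a separate process): bind once, push topic-tagged frames.
using (var pub = new PublisherSocket())
{
    pub.Bind("tcp://*:5556");
    // In a real test, give subscribers a moment to connect before publishing.
    pub.SendMoreFrame("game-state").SendFrame("payload");
}

// Subscriber side (normally a separate process): connect, subscribe, read frames.
using (var sub = new SubscriberSocket())
{
    sub.Connect("tcp://localhost:5556");
    sub.Subscribe("game-state");
    string topic = sub.ReceiveFrameString();   // "game-state"
    string payload = sub.ReceiveFrameString(); // "payload"
}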
If neither of these solves your problems, then you might have to look at changing your architecture. One common method for MMOs is to have related clients connect to the same servers to reduce inter-server communication requirements. Clients who legitimately need to communicate real-time data are put together on a single server which doesn't have to worry about back-plane issues. This server then communicates back to the 'Master' server only what is required to maintain world state and so on.
Plan your architecture to reduce the problems before they start... but don't spend weeks working on something that might not be necessary. Do some tests on SignalR and see what effect the backplane actually has on latency before you dive into the abyss.
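If you do run that test, wiring a backplane into classic SignalR 2.x is only a couple of lines in Startup. The Redis variant is sketched below (server, port, password and event key are placeholders, and the Microsoft.AspNet.SignalR.Redis package is assumed):

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Placeholder connection details; swap in your own Redis instance.
        GlobalHost.DependencyResolver.UseRedis("redis-host", 6379, "password", "MyApp");
        app.MapSignalR();
    }
}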

How to effectively communicate between database bound applications?

We have a number of different old school client-server C# WinForm client-side apps that are essentially front-ends for the database. Then there is a C# server-side windows service that waits on the client apps to submit orders and then it processes them.
The way the server-side service finds out whether there is work to do is that it polls the database. Over the years the logic of polling for waiting orders has gotten a lot more complicated due to the myriad of business rules. So because of this, the polling stored proc itself uses quite a bit of SQL Server resources even if there is nothing to do. Add to this the requirement that the orders be processed the moment they are submitted and you got yourself a performance problem, as the database is being polled constantly.
The setup actually works fine right now, but the load is about to go through the roof and it is obvious that it won't hold up.
What are some effective ways to communicate between a bunch of different client-side apps and a server-side windows service, that will be more future-proof than the current method?
The database server is SQL Server 2005. I can probably get the powers that be to pony up for latest SQL Server if it really comes to that, but I'd rather not fight that battle.
There are numerous ways you can notify the clients.
You can use a ready-made solution like NServiceBus, to publish information from the server to the clients or other servers. NServiceBus uses MSMQ to publish one message to multiple subscribers in a very easy and durable way.
You can use MSMQ or another queuing product to publish messages from the server that will be delivered to the clients.
You can host a WCF service on the Windows service and connect to it from each client using a Duplex channel. Each time there is a change the service will notify the appropriate clients or even all of them. This is more complex to code but also much more flexible. You could probably send enough information back to the clients that they wouldn't need to poll the database at all.
You can have the service broadcast a UDP packet to all clients to notify them there are changes they need to pull. You can probably add enough information in the packet to allow the clients to decide whether they need to pull data from the server or not. This is very lightweight for the server and the network, but it assumes that all clients are on the same LAN.
Perhaps you can leverage SqlDependency to receive notifications only when the data actually changes.
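A rough sketch of the SqlDependency route (connection string, table and column names are placeholders; the query must follow the notification rules, e.g. two-part table names and no SELECT *, and Service Broker must be enabled on the database):

using System.Data.SqlClient;

// Sketch: get a callback from SQL Server when pending orders change, instead of polling.
public class OrderWatcher
{
    private const string ConnectionString = "...";  // placeholder

    public void Start()
    {
        SqlDependency.Start(ConnectionString);
        Subscribe();
    }

    private void Subscribe()
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(
            "SELECT OrderId, Status FROM dbo.Orders WHERE Status = 0", conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (s, e) =>
            {
                // Notifications are one-shot: re-subscribe, then process the work.
                Subscribe();
                ProcessPendingOrders();
            };

            conn.Open();
            cmd.ExecuteReader().Dispose();  // executing the command registers the subscription
        }
    }

    private void ProcessPendingOrders() { /* existing order-processing logic */ }
}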
You can use any messaging middleware like MSMQ, JMS or TIBCO to communicate between your client and the service.
By far the easiest, and most likely the cheapest, answer is to simply buy a bigger server.
Barring that, you are in for a development effort that has a high probability of early failure. By failure I don't mean that you end up scrapping whatever it is you end up building. Rather, I mean you launch the changes and orders will be screwed up while you are debugging your myriad of business rules.
Quite frankly, I wouldn't consider approaching a communications change under this kind of pressure, presuming your statement about load going "through the roof" in the near term is accurate.
If your risk exposure is such that it has to be 100% functional on day one (which is normal when you are expecting a large increase in orders), with no hiccups, then just upsize the DB server. Heck, I wouldn't even install the latest SQL Server on it. Instead, just buy a larger machine, install the exact same OS and DB server (and patch levels), and move your database.
Then look at your architecture to determine what needs to go away and what can be salvaged.
If everybody connects to SQL Server then there is also the option of Service Broker. Unlike the other messaging/queueing solutions recommended so far, it is entirely contained in your database (no separate product to deploy, administer and configure), it offers a single story for your backup/recovery and high-availability needs (no separate backup for the message store, no separate DR/HA; whatever your DB solution is, it is also your messaging solution), and it offers a uniform programming API (SQL).
Even when everything is within one single SQL Server instance (i.e. there is no need to communicate over the network between multiple SQL Server instances), Service Broker still has an ace that no one can match: activation. With activation you eliminate completely the need to poll, because the system itself will launch your processing code (will 'activate') when there are events to process. The processing code can be internal (a T-SQL procedure or a SQLCLR .NET procedure) or external (see the external activator).

Server Push vs Client Pull for Agent-Server Topology

I need to create a system comprising of 2 components:
A single server that processes and stores data. It also periodically sends out updates to the agents.
Multiple agents that are installed at remote endpoints. These collect data in (often, but not always) long-running operations, and this data needs to get to the server
I'm using C# .NET, and ideally I want to use a standards-compliant communications method (i.e. one that could theoretically work with Java too, as we may well also use Java agents in the future). Are there any alternatives to web services? What are my options?
The way I see it I have 3 options using web services, and have made the following observations:
Client pull
No open port required at the agent, as it acts like a client
Would need to poll the server for updates
Server push
Open port at the agent, as it acts like a server
Server must poll agents for results
Hybrid
Open port at the agent, as it acts like both a client and a server
No polling; server pushes out updates when required, client sends results when they are available
The 'hybrid' (where agents are both client and server) seems the obvious choice - but this application will typically be installed in enterprise and government environments, and I'm concerned they may have an issue with opening a port at the agent. Am I dwelling too much on this?
Are there any other pros and cons I've missed out?
Our friends at http://www.infrastructures.org swear by pull-based mechanisms: http://www.infrastructures.org/papers/bootstrap/bootstrap.html
A major reason why they prefer client-pull over server-push is that clients may be down, and clients must (in general) apply all the operations pushed by servers. If this criterion isn't important in your case, perhaps their conclusion won't be your conclusion, but I do think it is worth reading the "Push vs Pull" section of their paper to determine for yourself.
I would say that in this day and age you can seriously consider only pull technologies. The problem with push is that clients are often hidden behind Network Address Translation (NAT) devices like wireless routers, broadband modems or company firewalls, and they are, more often than not, unreachable from the server.
Making outbound connections ('phone home'), especially on well-known ports like HTTP/HTTPS, can basically be assumed to be possible even on the most restricted networks.
If you use some kind of messaging server (JMS for Java, not sure for C#) then your messaging server is the only server that needs to open a port and you can have two way communication from your agent to the messaging server and from the server to the messaging server. This would allow you to accomplish the hybrid model without needing to open a port on the agent server.
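In .NET, one equivalent middle man could be MSMQ via System.Messaging; a bare-bones send/receive sketch (the queue path is a placeholder) would look something like this:

using System.Messaging;

// Sketch: agents push results into a queue hosted on the messaging server;
// the central server drains it. Only the queue host needs an inbound port.
public static class AgentMessaging
{
    // Placeholder path; point it at a queue on the messaging server.
    private const string QueuePath =
        @"FormatName:DIRECT=OS:msgserver\private$\agent-results";

    // Agent side: fire-and-forget a result.
    public static void SendResult(string payload)
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            queue.Send(payload, "AgentResult");
        }
    }

    // Server side: block until a message arrives and return its body.
    public static string ReceiveResult()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            using (Message message = queue.Receive())
            {
                return (string)message.Body;
            }
        }
    }
}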
IMHO, your best option is the pull option; it can satisfy your main system requirements as follows:
The first part (data needs to get to the server) can obviously be done by invoking a web method that sends that data as a parameter.
The second part (the server periodically sends out updates to the agents) can still be done through regular client pulls, via some sort of web service method that asks for the updates since the last pull (some sort of timestamp to get the updates it missed).
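A sketch of that pull loop on the agent, assuming a hypothetical HTTP endpoint that takes a 'since' timestamp (URL, parameter name and interval are all illustrative):

using System;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch: the agent periodically asks the server for updates newer than the
// last one it saw, so no inbound port is needed on the agent.
public class UpdatePoller
{
    private static readonly HttpClient Http = new HttpClient();
    private DateTimeOffset _lastSeen = DateTimeOffset.MinValue;

    public async Task PollForeverAsync(Uri baseUrl)
    {
        while (true)
        {
            var url = new Uri(baseUrl,
                $"updates?since={Uri.EscapeDataString(_lastSeen.ToString("o"))}");
            var body = await Http.GetStringAsync(url);   // hypothetical endpoint

            if (!string.IsNullOrEmpty(body))
            {
                ApplyUpdates(body);
                _lastSeen = DateTimeOffset.UtcNow;       // or a timestamp returned by the server
            }

            await Task.Delay(TimeSpan.FromSeconds(30));  // poll interval
        }
    }

    private void ApplyUpdates(string payload) { /* agent-specific handling */ }
}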
The hybrid method seems a bit weird to me, given that I think of an agent as a part of the system that might go offline quite often. What will the server do if its push fails? That's usually a tough question/decision, especially if you're not sure whether this is an intended 'going offline' or a system/network failure, etc.

Testing a C# sockets based multithreading server

We're developing a server application that uses sockets to handle client communication, and obviously we want to test the server. Does anyone know of a good (open-source) harness that can test the server using multiple threads on multiple client machines?
Quite frankly, I would just roll your own; that's what I always do. An open-source application would not be able to send packets according to your application-level protocol, and I'd advise testing at least authentication, some basic application-level instructions, etc.
What I usually do is write a stress-tester application that spawns x clients (I usually test about 1000 clients per server) and then use a Timer (e.g. 25 ms resolution); on every timer callback I randomly pick one of the connected clients and randomly either do nothing, send data, or disconnect the client. If I receive a disconnect notification for one of the clients, I spawn a new one to bring the total connected count back to x.
Then I let the whole thing run for a couple hours...
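A condensed sketch of that kind of stress tester (host, port and payload are placeholders; error handling is omitted):

using System;
using System.Collections.Generic;
using System.Net.Sockets;
using System.Threading;

// Sketch: keep N connections open and, on a fast timer, randomly make one of
// them idle, send data, or disconnect (a disconnected client is replaced).
public class StressTester
{
    private const int TargetClients = 1000;
    private readonly List<TcpClient> _clients = new List<TcpClient>();
    private readonly Random _random = new Random();
    private readonly object _sync = new object();

    public void Run(string host, int port)
    {
        for (int i = 0; i < TargetClients; i++)
            _clients.Add(Connect(host, port));

        using (var timer = new Timer(_ => Tick(host, port), null, 0, 25)) // ~25 ms resolution
        {
            Console.ReadLine();   // let it run for hours; press Enter to stop
        }
    }

    private static TcpClient Connect(string host, int port)
    {
        var client = new TcpClient();
        client.Connect(host, port);
        return client;
    }

    private void Tick(string host, int port)
    {
        lock (_sync)
        {
            var client = _clients[_random.Next(_clients.Count)];
            switch (_random.Next(3))
            {
                case 0:   // do nothing this tick
                    break;
                case 1:   // send some application-level payload
                    var payload = new byte[] { 0x01, 0x02, 0x03 };
                    client.GetStream().Write(payload, 0, payload.Length);
                    break;
                case 2:   // disconnect and immediately replace to keep the count at N
                    client.Close();
                    _clients.Remove(client);
                    _clients.Add(Connect(host, port));
                    break;
            }
        }
    }
}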
