Multi-server n-tier synchronized timing and performance metrics? - C#

[I'm not sure whether to post this on Stack Overflow or Server Fault, but since this is a C# development project, I'll stick with Stack Overflow...]
We've got a multi-tiered application that is exhibiting poor performance at unpredictable times of the day, and we're trying to track down the cause(s). It's particularly difficult to fix because we can't reproduce it on our development environment - it's a sporadic problem on our production servers only.
The architecture is as follows:
- Load-balanced front-end web servers (IIS) running an MVC application (C#).
- A home-grown service bus, implemented with MSMQ running in domain-integration mode.
- Five 'worker pool' servers, running our Windows service, which responds to requests placed on the bus.
- A back-end SQL Server 2012 database, mirrored and replicated.
All servers have high-spec hardware and run Windows Server 2012 with the latest releases and latest Windows updates. Everything is bang up to date.
When a user hits an action in the MVC app, the controller itself is very thin. Pretty much all it does is put a request message on the bus (send an MSMQ message) and await the reply.
One of the servers in the worker pool picks up the message, works out what to do and then performs queries on the SQL Server back end and does other grunt work. The result is then placed back on the bus for the MVC app to pick back up using the Correlation ID.
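As a minimal sketch of that request/reply pattern using System.Messaging (the queue paths and string payload here are assumptions, and the worker is expected to stamp the reply's CorrelationId with the request's Id):

    using System;
    using System.Messaging;

    class BusClient
    {
        public static string SendAndAwaitReply(string requestBody)
        {
            using (var requestQueue = new MessageQueue(@".\private$\bus.requests"))
            using (var replyQueue = new MessageQueue(@".\private$\bus.replies"))
            {
                var request = new Message(requestBody);
                requestQueue.Send(request);

                // Block until the worker's reply, matched on the correlation ID.
                Message reply = replyQueue.ReceiveByCorrelationId(
                    request.Id, TimeSpan.FromSeconds(30));
                reply.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
                return (string)reply.Body;
            }
        }
    }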
It's a nice architecture to work with in respect to the simplicity of each individual component. As demand increases, we can simply add more servers to the worker pool and all is normally well. It also allows us to hot-swap code in the middle tier. Most of the time, the solution performs extremely well.
However, as stated, we do have these moments where performance is a problem. It's proving difficult to track down at which point(s) in the architecture the bottleneck lies.
What we have attempted to do is send a request down the bus and roundtrip it back to the MVC app with a whole suite of timings and metrics embedded in the message. At each stop on the route, a timestamp and other metrics are added to the message. Then when the MVC app receives the reply, we can screen dump the timestamps and metrics and try to determine which part of the process is causing the issue.
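For illustration, the payload looks something like this (the names are illustrative, not our exact code):

    // Appended to the message at each hop, e.g. on dequeue and after the DB work:
    [Serializable]
    public class HopTiming
    {
        public string ServerName;     // Environment.MachineName at that hop
        public string Stage;          // e.g. "Dequeued", "QueryComplete"
        public DateTime TimestampUtc; // only comparable across servers if clocks agree
    }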
However, we soon realised that we cannot rely on the Windows clock as an accurate measure, because many of our processes complete at the 5-100 ms level and a message can go through five servers (and back again). We cannot synchronize the time across the servers to that resolution. MS article: http://support.microsoft.com/kb/939322/en-us
To compound the problem, each time we send a request, we can't predict which particular worker pool server will handle the message.
What is the best way to get an accurate, coordinated and synchronized time that is accurate to the 5ms level? If we have to call out to an external (web)service at each step, this would add extra time to the process, and how can we guarantee that each call takes the same amount of time on each server? Even a small amount of latency in an external call on one server would skew the results and give us a false positive.
Hope I have explained our predicament and look forward to your help.
Update
I've just found this: http://www.pool.ntp.org/en/use.html, which might be promising. Perhaps a scheduled job every x hours to keep the time synchronised could get me to the sub 5 ms resolution I need. Comments or experience?
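For what it's worth, the core of an SNTP query is small enough to embed in a diagnostic tool. This is a minimal sketch (plain SNTP with no filtering or clock discipline, so treat the result as approximate; reliable sub-5 ms accuracy needs a full NTP client):

    using System;
    using System.Net;
    using System.Net.Sockets;

    class SntpProbe
    {
        // Returns the approximate offset of the local clock from the NTP server.
        public static TimeSpan GetClockOffset(string server = "pool.ntp.org")
        {
            var packet = new byte[48];
            packet[0] = 0x1B; // LI = 0, Version = 3, Mode = 3 (client)

            using (var socket = new Socket(AddressFamily.InterNetwork,
                                           SocketType.Dgram, ProtocolType.Udp))
            {
                socket.ReceiveTimeout = 3000;
                socket.Connect(server, 123);

                DateTime t0 = DateTime.UtcNow;
                socket.Send(packet);
                socket.Receive(packet);
                DateTime t3 = DateTime.UtcNow;

                // Server transmit timestamp: big-endian seconds since 1900, at offset 40.
                uint seconds  = (uint)IPAddress.NetworkToHostOrder(BitConverter.ToInt32(packet, 40));
                uint fraction = (uint)IPAddress.NetworkToHostOrder(BitConverter.ToInt32(packet, 44));
                double ms = seconds * 1000.0 + fraction * 1000.0 / 4294967296.0;
                DateTime serverTime = new DateTime(1900, 1, 1, 0, 0, 0, DateTimeKind.Utc)
                                          .AddMilliseconds(ms);

                // Offset against the midpoint of the round trip (assumes symmetric latency).
                return (serverTime - t0) - TimeSpan.FromTicks((t3 - t0).Ticks / 2);
            }
        }
    }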
Update 2
FWIW, we've found the cause of the performance issue. It occurred when the software tested whether a queue had been created before opening it, so it was essentially looking up the queue twice, which is fairly expensive. The issue has now gone away.
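In other words, the expensive shape of the code was roughly the commented-out pattern below; opening optimistically and handling the rare miss avoids the double lookup (a sketch, not our exact code):

    using System.Messaging;

    static void SendToQueue(string path, object body)
    {
        // Slow: calling MessageQueue.Exists(path) before every open means two lookups.
        // Faster: open and send optimistically; create only on the rare miss.
        var queue = new MessageQueue(path);
        try
        {
            queue.Send(body);
        }
        catch (MessageQueueException e)
        {
            if (e.MessageQueueErrorCode != MessageQueueErrorCode.QueueNotFound)
                throw;
            MessageQueue.Create(path);
            new MessageQueue(path).Send(body);
        }
    }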

What you should try is the Performance Monitor that's part of Windows itself. You can create a Data Collector Set on each of the servers and select the metrics you want to monitor. Something like Request Execution Time would be a good one to watch.
Here's a tutorial for Data Collector Sets: https://www.youtube.com/watch?v=591kfPROYbs
Hopefully this will give you a start on troubleshooting the problem.
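If you'd rather sample the same counters from code, here is a small sketch (the ASP.NET category exposes Request Execution Time; adjust the category/counter to whatever you pick in the Data Collector Set):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class CounterSampler
    {
        static void Main()
        {
            using (var counter = new PerformanceCounter(
                "ASP.NET", "Request Execution Time", readOnly: true))
            {
                for (int i = 0; i < 10; i++)
                {
                    // Milliseconds spent executing the most recent request.
                    Console.WriteLine("{0:o}  {1} ms", DateTime.UtcNow, counter.NextValue());
                    Thread.Sleep(1000);
                }
            }
        }
    }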

Related

How to prevent NHibernate long-running process from locking up web site?

I have an NHibernate MVC application that is using ReadCommitted Isolation.
On the site, there is a certain process that the user can initiate which, depending on the input, may take several minutes. This is because the session is per-request and is open that entire time.
But while that runs, no other user can access the site (they can try, but their requests won't go through until the long-running operation is finished).
What's more, I also need a console app that performs this same long-running function while connecting to the same database. It causes the same issue.
I'm not sure what part of my setup is wrong, any feedback would be appreciated.
NHibernate is set up with fluent configuration and StructureMap.
Isolation level is set as ReadCommitted.
The session factory lifecycle is HybridLifeCycle (which on the web should be Session per request, but on the win console app would be ThreadLocal)
It sounds like your requests are waiting on database locks. Your options are really:
Break the long running process into a series of smaller transactions.
Use ReadUncommitted isolation level most of the time (this is appropriate in a lot of use cases).
Judicious use of Snapshot isolation level (Assuming you're using MS-SQL 2005 or later).
(N.B. I'm assuming the long-running function does a lot of reads/writes and the requests being blocked are primarily doing reads.)
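A minimal sketch of options 2/3 in NHibernate terms: pass an explicit isolation level when opening the transaction instead of relying on the configured ReadCommitted default (sessionFactory is assumed to be your configured ISessionFactory, and Snapshot requires ALLOW_SNAPSHOT_ISOLATION to be enabled on the database):

    using System.Data;
    using NHibernate;

    void DoReads(ISessionFactory sessionFactory)
    {
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction(IsolationLevel.Snapshot))
        {
            // ... reads that shouldn't block behind the long-running writer ...
            tx.Commit();
        }
    }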
As has been suggested, breaking your process down into multiple smaller transactions will probably be the solution.
I would suggest looking at something like Rhino Service Bus or NServiceBus (my preference is Rhino Service Bus; I find it much simpler to work with personally). What that allows you to do is separate the functionality into small chunks while maintaining the transactional nature. Essentially, with a service bus you send a message to initiate a piece of work, and that piece of work is enlisted in a distributed transaction along with receiving the message, so if something goes wrong the message does not just disappear and leave your system in a potentially inconsistent state.
Depending on what you need to do, you could send an initial message to start the processing, and then after each step send a new message to initiate the next step (sketched below). This really helps break the transactions down into much smaller pieces of work (and simplifies the code). The two service buses I mentioned (there is also MassTransit) also have retries and error handling built in, so that if something goes wrong the message ends up in an error queue; you can investigate what went wrong, hopefully fix it, and reprocess the message, thus ensuring your system remains consistent.
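A rough sketch of that message-per-step idea, in the NServiceBus v3-era handler style (the message types, handler, and JobId are made up for illustration; Rhino Service Bus has a very similar ConsumerOf<T> pattern):

    using System;
    using NServiceBus;

    public class ResizeImages : IMessage { public Guid JobId { get; set; } }
    public class PublishImages : IMessage { public Guid JobId { get; set; } }

    public class ResizeImagesHandler : IHandleMessages<ResizeImages>
    {
        public IBus Bus { get; set; } // injected by the bus

        public void Handle(ResizeImages message)
        {
            // Do one small, transactional chunk of the work here...

            // ...then hand the next step off as its own message, so each step
            // gets its own (distributed) transaction and retry behaviour.
            Bus.Send(new PublishImages { JobId = message.JobId });
        }
    }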
Of course whether this is necessary depends on the requirements of your system :)
Another, more complex, solution would be:
- You build a background robot application which runs on one of the machines.
- This background worker robot can receive 'worker jobs' (the ones initiated by the user).
- The robot then processes the jobs step by step in the background.
Pitfalls are:
- you have to program this robot to be very stable
- you need to watch the robot somehow
Sure, this involves more work; on the flip side, you will have the option to integrate more job types, enabling your system to process different things in the background.
I think the design of your application/SQL statements has a problem. Unless you are Facebook, I don't think any process should take this long; it is better to review your design and check where the bottleneck is, instead of trying to make this long-running process keep going.
Also, an ORM is not good for every scenario. Did you try using stored procedures?

Need help with the architecture for a penny bidding website

I'm trying to create a website similar to BidCactus and LanceLivre.
The specific part I'm having trouble with is the seconds aspect of the timer.
When an auction starts, a 15-second timer starts counting down, and every time a person bids, the timer is reset and the price of the item is increased by $0.01.
I've tried using SignalR for this bit, and while it does work well during trial runs in the office, it's just not good enough for real-world usage where seconds count. I would get HTTP 503 errors when too many users were bidding and idling on the site.
How can I make the timer on the client's end show the correct remaining time?
Would HTTP GETting that information with AJAX every second allow me to properly display the remaining time? That's a request every second!
And not only that, but when a user makes that GET request, I calculate the remaining seconds; by the time the user sees the response, that figure is already stale, as a second or more might pass between processing and returning. Do you see my conundrum?
Any suggestions on how to approach this problem?
There are a few problems with the solution you described:
1. It is extremely wasteful. There is already a fairly high-accuracy clock built into every computer on the Internet.
2. The Internet always has latency. By the time the packet reaches the client, it will be old.
3. The Internet is a variable-latency network, so the time-update packets you get could be a second or more behind for one packet, and as little as 20 ms behind for another.
It takes complicated algorithms to deal with #2 and #3.
If you actually need second-level accuracy
There is existing Internet-standard software that solves it - the Network Time Protocol.
Use a real NTP client (not the one built into Windows - it only guarantees accuracy to within a couple of seconds) to synchronize your server with national-standard NTP servers, and build a real NTP client into your application. Sync the time on your server regularly, and sync the time on the client regularly (possibly each time they log in/connect? maybe every hour?). Then simply use the system clock for time calculations.
Don't try to sync the client's system time - they may not have access to do so, and certainly not from the browser. Instead, you can get a reference time relative to the system time, and simply add the difference as an offset on client-side calculations.
If you don't actually need second-level accuracy
You might not really need to guarantee accuracy to within a second.
If you make this decision, you can simplify things a bit. Simply transmit a relative finish time to the client for each auction, rather than an absolute time. Re-request it on the client side every so often (e.g. every minute). Their global system time may be out of sync, but the second hand on their clock should tick down seconds fairly accurately.
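On the server side, that could be as simple as the following sketch (the controller, action, and data-access names are made up):

    using System;
    using System.Web.Mvc;

    public class AuctionController : Controller
    {
        public JsonResult TimeLeft(int auctionId)
        {
            DateTime endsAtUtc = LookUpAuctionEndUtc(auctionId); // your data access
            double secondsLeft = Math.Max(0, (endsAtUtc - DateTime.UtcNow).TotalSeconds);
            return Json(new { secondsLeft }, JsonRequestBehavior.AllowGet);
        }

        private DateTime LookUpAuctionEndUtc(int auctionId)
        {
            throw new NotImplementedException(); // placeholder for your repository
        }
    }

The client then counts down from secondsLeft locally and re-syncs on each poll.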
If you want to make this a little more slick, you could try to determine the (relative) latency of each call to the server. Keep track of how much time has passed between calls to the server and the time-left value from the previous call, compare them, and base your new time-left value on whichever is smaller.
I'd be careful when engineering such a solution, though. If you get the calculations wrong, or are dealing with inaccurate system clocks, you could break your whole syncing model, or unintentionally cause the client to prefer the highest-latency call. Make sure you account for all cases if you write the "slick" version of this code :)
One way to get really good real-time communication is to open a connection from the browser to a special tcp/ip socket server that you write on the server. This is how a lot of chat packages on the web work.
Duplex sockets allow you to push data both directions. Because the connection is already open, you can send quite a bit of very fast data across.
In the past, you needed to use Adobe Flash to accomplish this. I'm not sure if browsers have advanced enough to handle this without a plugin (e.g. WebSockets?)
Another approach worth looking at is long polling. In concept, a connection is made to the server that just doesn't die, and it gives you the opportunity on the server to trickle bits of realtime data down to the clients.
Just some pointers. I have written web software using JavaScript <-> Flash <-> Python/PHP, and was pleased with how it worked.
Good luck.

Silverlight/WCF application is suddenly transferring extremely slowly

I have a Silverlight 4 application that works a lot with a WCF service. The application has normally run fine, with fast response times for even some hefty queries. Recently, however, it's gotten quite slow, and I'm having a hard time troubleshooting why.
My database is hosted on a remote server. The application is hosted on the same server. Here's what I've noted:
When I run the application locally, using the ASP.NET development server instead of IIS, and I hit the website via localhost (which hits the remote database), speeds are fast.
When I run the application locally, but use the remote WCF service rather than the local service, things are slow.
When I run the application over the web, (i.e. the remote application which is, again, on the same server as the database, so they're local to one another) the application is slow. This is pretty much what the production environment is...
When I log on to the server and hit the website from within the server, things are fast.
The queries to the database are fast. Manually running the queries on the database themselves, yields the results in a split second.
Using the WCFTestClient and hitting the remote WCF service is also really fast, and has virtually immediate turn around.
Lastly, when I'm using the expected setup of my local machine hitting the website over the web, which hits the database, etc:
Not all queries react the same way. Some of the heavier queries that result in large data sets actually have a quick response time. Some of the light queries - straight SELECT statements with no JOINs that generate only a kilobyte of data - take a lot longer: about 30 seconds. There are a few queries that are sometimes fast, sometimes slow, but the ones that are always slow are the worst.
About the server:
The server is a dedicated server; I've monitored the CPU and it's not being taxed by anything. I'm hosting with IIS 7, on Windows Server 2008, with SQL Server 2008. The only things that have changed in the past few weeks are some Windows updates, and I've been told by one person that they made some firewall changes. That's my current theory on the cause, but I don't know what else to try at this point, or how to show that it is the firewall.
Any thoughts?
It's hard to pin down the reasons from what you've described. I think you should start profiling your application by logging the database time, WCF request-processing time, etc.
Once you have the data, you can find the real reason. This is what we have been doing on our products.
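Even something as blunt as the following sketch around each leg will tell you where the time goes (the service proxy and method here are made up):

    using System.Diagnostics;

    var sw = Stopwatch.StartNew();
    var customers = serviceClient.GetCustomers(query); // hypothetical WCF call
    sw.Stop();
    Trace.WriteLine(string.Format("GetCustomers: {0} ms", sw.ElapsedMilliseconds));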
If I had to guess, you're experiencing a combination of network latency and a less-than-optimal database design. Your description of "small" queries taking longer than queries yielding large result sets is a classic indicator that you need to evaluate your query plans, and ensure that they are using the right indexes (you are using indexes, right?).
I suspect that sorting out your database issues will solve a great deal of the slowness you're experiencing; caching query results in memcached or something like it will solve most of the rest.
Generally, WCF is the last place I look for performance problems. Every time I've gone in that direction in the past, the trouble ended up being our code; WCF performs admirably for its size.
I'm sorry that I can't be more specific, but performance questions are quite application-specific and we don't have much information here to go on.
Fiddler. Fiddler was the answer (as it usually turns out to be).
If you've experienced similar issues, hopefully what I've learned can be of help.
Here's what I saw:
First, when using the Chrome and IE profilers, it became clear that the request itself was causing the lag, while the response was quite quick.
This led me down two paths of possibilities: either the server was causing lag in the requests due to some specific configuration that I wouldn't see when running via localhost, or there was something wrong with the request itself.
After using Fiddler to get a full view of the request, it became apparent that it was the request I was sending. One of the objects I was passing as a parameter to my WCF service had a property that, when serialized, amounted to about 1 megabyte's worth of data - and that was with gzip enabled. Initially this object was a rather small object, but as the application grew, so did this particular object, resulting in the sudden slow down.
The reason why it happened for certain calls and not others was purely determined by whichever call had this object as a parameter.
The reason why it happens when going over the web, vs. going through localhost, is that over the web you inevitably face your provider's upload limit, as well as a number of hops before you hit your server, versus the direct connection from your localhost to your database.
The lesson: Always transmit the least amount of information you can get away with.
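A quick way to spot this kind of payload creep is to measure what WCF will actually serialize (WCF uses the DataContractSerializer by default); a sketch:

    using System.IO;
    using System.Runtime.Serialization;

    static long MeasureSerializedSize(object parameter)
    {
        var serializer = new DataContractSerializer(parameter.GetType());
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, parameter);
            return stream.Length; // bytes before any transport compression
        }
    }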

Design considerations for high-reliability service

I am writing a c# windows service which will perform some background processing - basically it is a consumer for a work queue.
It needs to not go down (stop processing new items), and if it does go down I need to be notified.
What are some design guidelines and considerations for a) ensuring that such a service is as reliable as possible, and b) sending out a notification if something does go wrong? I have considered, for instance, creating a watcher thread whose only job is to make sure the worker thread is still processing jobs.
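Something like this rough sketch is what I had in mind (the names and the 30-second poll interval are placeholders):

    using System;
    using System.Threading;

    class Watchdog
    {
        private long _lastBeatTicks = DateTime.UtcNow.Ticks;

        // The worker thread calls this after each item it processes.
        public void Beat()
        {
            Interlocked.Exchange(ref _lastBeatTicks, DateTime.UtcNow.Ticks);
        }

        // The watcher thread alerts if the heartbeat goes stale.
        public void Start(TimeSpan maxSilence, Action onStalled)
        {
            var watcher = new Thread(() =>
            {
                while (true)
                {
                    Thread.Sleep(TimeSpan.FromSeconds(30));
                    var last = new DateTime(Interlocked.Read(ref _lastBeatTicks),
                                            DateTimeKind.Utc);
                    if (DateTime.UtcNow - last > maxSilence)
                        onStalled(); // e.g. log, restart the worker, notify
                }
            }) { IsBackground = true };
            watcher.Start();
        }
    }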
There are a number of things that you can do here to help improve the reliability, as well as gauge that you have a solution that is going to meet your needs.
Testing
First and foremost, though, the testing process that you go through will need to be a very solid one. Test for those "unexpected" situations: loss of network connection, etc. Make sure that you are testing those and seeing what happens. Notification on failure can be a bit of a "mixed bag". For example, you can't e-mail yourself if you don't have a network connection available.
Proper Code Design
In addition to setting up valid test scenarios, be sure that your code is as bullet-proof as possible. Since you are creating a Windows service, be sure that you are capturing, logging, and dealing with all possible errors; if an error bubbles up to the OS, your service will go down.
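As a last line of defence, here is a sketch of a last-chance handler wired up in the service's entry point, so nothing escapes unlogged (the event source name is a placeholder and must be registered beforehand):

    using System;
    using System.Diagnostics;

    AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
    {
        var ex = e.ExceptionObject as Exception;
        EventLog.WriteEntry("MyWorkerService",
            "Fatal: " + (ex != null ? ex.ToString() : e.ExceptionObject.ToString()),
            EventLogEntryType.Error);
    };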
Monitoring
Consider putting monitoring in place. In my day job we use two types of monitoring: errors are reported to the Windows Event Log in some cases, and Microsoft MOM is used to notify us of any/all issues in the environment. A second process that we use is a scheduled job that, every X minutes, validates that the critical service is in a "Started" state; if it isn't, it re-starts it. Not elegant, but it works.
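That scheduled check can be as small as this sketch using ServiceController (the service name is an example):

    using System;
    using System.ServiceProcess;

    using (var sc = new ServiceController("MyCriticalWorker"))
    {
        if (sc.Status != ServiceControllerStatus.Running)
        {
            sc.Start();
            sc.WaitForStatus(ServiceControllerStatus.Running,
                             TimeSpan.FromSeconds(30));
        }
    }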
I think MOM and/or SolarWinds or some other monitoring application, which your system administrator might already be using to monitor the machine on which the service is deployed, could take the proper action (send email, ring phones :)

.net remoting stops every 100 seconds

We have a very strange problem: one of our applications continually queries a server using .NET Remoting, and every 100 seconds the application stops querying for a short duration and then resumes. The problem is on the client and not on the server, because the application actually queries several servers at the same time and stops receiving data from all of them at the same time.
100 seconds is a giveaway number, as it's the default timeout for a WebRequest in .NET.
I've seen in the past that the PSI (Project Server Interface within Microsoft Project) didn't override the timeout, so the default of 100 seconds was applied and would terminate anything talking to it for longer than that.
Do you have access to all of the code and are you sure you have set timeouts where applicable so that any defaults are not being applied unbeknownst to you?
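For reference, a sketch of that default and how it's overridden on a plain HttpWebRequest (remoting channels expose their own timeout settings, which may be where the same 100-second default is biting you):

    using System;
    using System.Net;

    var request = (HttpWebRequest)WebRequest.Create("http://example.com/endpoint");
    // Timeout defaults to 100000 ms (100 seconds).
    request.Timeout = (int)TimeSpan.FromMinutes(5).TotalMilliseconds;
    request.ReadWriteTimeout = (int)TimeSpan.FromMinutes(5).TotalMilliseconds;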
I've never seen that behavior before and unfortunately it's a vague enough scenario I think you're going to have a hard time finding someone on this board who's encountered the problem. It's likely specific to your application.
I think there are a few investigations you can do to help you narrow down the problem.
Determine whether it's the client or server that is actually stalling. If you have problems determining this, try installing a packet filter and monitor the traffic to see who sent the last data. You likely won't be able to read the binary data but at least you will get a sense of who is lagging behind.
Once you figure out whether it's the client or server causing the lag, attempt to debug into the application and get a breakpoint where the hang occurs. This should give you enough details to help track down the problem. Or at least ask a more defined question on SO.
How is the application coded to implement the continuous querying? Is it in a continuous loop, or a loop with a Thread.Sleep, or on a timer?
It would first be useful to determine whether your system is executing this "trigger" in your code when you expect it to, or whether it is and the remoting server is simply not responding... so...
If you cannot reproduce this issue in a development environment where you can debug it, then, if you can, I suggest you add code to this loop to write out to a log file (or some other persistence mechanism) each time it should be examining whatever conditions it uses to decide whether to query the remoting server, and then review those logs when the problem recurs...
If you can do the same in your remoting server, to record when the server receives a remoting request, this would help as well...
... and oh yes, just a thought (I don't know how you have coded this...), but if you are using a separate thread in the client to issue the remoting request, and the channel is being registered and unregistered on that separate thread, make sure you are deconflicting the requests, because you can't register the same port twice on the same machine at the same time...
(although this should probably have raised an exception in your client if this was the issue)
