SETUP:
We have a .NET application distributed over 6 local servers, each with its own local Oracle database, plus 1 main server and 1 load-balancer machine. Requests come to the load balancer, which redirects each incoming request to one of the 6 local servers. At certain intervals, data is gathered on the main server and redistributed to the 6 local servers so that decisions can be made with the complete data set.
Each local server has a cache component that caches the incoming requests based on various parameters (location, incoming parameters, etc.). For each request, a local server decides whether to go to the database (Oracle) or serve the response from the cache. However, in both cases the local server has to go to the database to do 1 insert and 1 update per request.
PROBLEM:
On a peak day each local server receives 2,000 requests per second and the system starts slowing down (CPU around 90%). I am trying to increase capacity before adding another local server to the mix. After running some benchmarks, the bottleneck, as usual, turns out to be the unavoidable 1 insert and 1 update per request to the database.
METHODS TRIED:
To decrease that write frequency I created a Windows service that sits between the database and the .NET application. It contains a pipe server, receives each insert and update from the main .NET application, and stores them in a Hashtable. At certain intervals the service goes to the database once and performs the accumulated inserts and updates as batches. The point was to hit the database less frequently. Although this had a positive effect, it didn't reduce the system load as much as I expected; most of the CPU load still comes from oracle.exe as requests per second increase.
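A minimal sketch of this batching idea, assuming the Oracle.ManagedDataAccess.Client (ODP.NET) provider and a hypothetical REQUEST_LOG table with made-up columns; the pipe-server plumbing is left out. Array binding lets each flush send every buffered row to Oracle in a single round trip and a single commit:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using Oracle.ManagedDataAccess.Client;

public sealed class WriteBatcher : IDisposable
{
    private readonly ConcurrentQueue<KeyValuePair<int, string>> _pending =
        new ConcurrentQueue<KeyValuePair<int, string>>();
    private readonly Timer _flushTimer;
    private readonly string _connectionString;

    public WriteBatcher(string connectionString, TimeSpan interval)
    {
        _connectionString = connectionString;
        _flushTimer = new Timer(_ => Flush(), null, interval, interval);
    }

    // Called by the pipe server for every write received from the web application.
    public void Enqueue(int requestId, string payload)
    {
        _pending.Enqueue(new KeyValuePair<int, string>(requestId, payload));
    }

    private void Flush()
    {
        // Drain whatever has accumulated since the last flush.
        var ids = new List<int>();
        var payloads = new List<string>();
        KeyValuePair<int, string> item;
        while (_pending.TryDequeue(out item))
        {
            ids.Add(item.Key);
            payloads.Add(item.Value);
        }
        if (ids.Count == 0) return;

        using (var conn = new OracleConnection(_connectionString))
        using (var cmd = conn.CreateCommand())
        {
            conn.Open();
            // Array binding: one statement, one round trip, one commit for the whole batch.
            cmd.CommandText = "INSERT INTO REQUEST_LOG (REQUEST_ID, PAYLOAD) VALUES (:id, :payload)";
            cmd.ArrayBindCount = ids.Count;
            cmd.Parameters.Add(new OracleParameter("id", OracleDbType.Int32) { Value = ids.ToArray() });
            cmd.Parameters.Add(new OracleParameter("payload", OracleDbType.Varchar2) { Value = payloads.ToArray() });
            cmd.ExecuteNonQuery();
        }
    }

    public void Dispose()
    {
        _flushTimer.Dispose();
    }
}
```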
I am trying to avoid going to the database as much as I can, and apart from the solution above, the only way to avoid the DB seems to be increasing the cache hit ratio. My cache hit ratio is currently around 81%. Because each local machine has its own cache, I am actually missing a lot of cacheable requests: when two similar requests are redirected to different servers, the second request cannot benefit from the cached result of the first one.
I don't have a lot of experience in system architecture, so I would appreciate any help with this problem. Any suggestions on different caching architectures or setups, or any tools, are welcome.
Thank you in advance, hopefully I made my question clear.
To me this looks like an application for a TimesTen solution. In that case you can eliminate the local databases and return to just one. Where you now have the local Oracle databases, you can implement a cache grid. Most likely this is going to be an AWT (Asynchronous Write-Through) cache. See Oracle In-Memory Database Cache Concepts.
It's not a cheap option, but it could be worth investigating.
You can keep concentrating on the business logic and have no worries about speed. This of course only works well if the application code is already tuned and the SQL is performant and scalable. The SQL has to be prepared (using bind variables) to get the best performance.
Your application connects to the cache and no longer to the database. You create the cache tables in the cache group for the tables you want cached. All tables referenced in a SQL statement should be cached; otherwise the complete statement is passed through to the Oracle database. A cache-fusion mechanism is in place in the grid, so you have no worries about where in the grid your data is located.
The current release includes support for .NET.
The data is consistent and asynchronously updated to the Oracle database. If the data that is needed is in the cache and you take the Oracle database down, the app can keep running. As soon as the database is back, the synchronization picks up again. Very powerful.
2,000 requests per second per server across 6 servers is roughly 12,000 requests per second, and with 1 insert plus 1 update each, that is about 24,000 write operations per second against the database. That is a HUGE load for a DB.
Try to optimize, scale up, or cluster the database.
Maybe a NoSQL DB (Redis/Raven/Mongo) used as middleware would suit you: each local server reads/writes a sharded NoSQL DB, and the aggregated data is synchronized with Oracle during off-peak times.
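As a very rough sketch of that idea (assuming the StackExchange.Redis client and a made-up "pending-writes" list key), each local server could push its writes into Redis and let an off-peak job drain them into Oracle:

```csharp
using StackExchange.Redis;

public static class WriteBuffer
{
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("localhost:6379");

    // Each local server appends its writes to a Redis list instead of hitting Oracle directly.
    public static void BufferWrite(string serializedRow)
    {
        IDatabase db = Redis.GetDatabase();
        db.ListRightPush("pending-writes", serializedRow);
    }

    // An off-peak job drains the list and performs the batched inserts/updates against Oracle.
    public static void DrainToOracle()
    {
        IDatabase db = Redis.GetDatabase();
        RedisValue value;
        while ((value = db.ListLeftPop("pending-writes")).HasValue)
        {
            // Collect the drained rows into a batch here and flush them to Oracle,
            // e.g. with array binding as sketched earlier in this thread.
        }
    }
}
```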
I know the question is old now, but I wanted to let everyone know how we solved our issue.
After trying many optimizations, it turned out that all we needed was solid-state drives for the 6 local machines. CPU usage dropped to 30% immediately after we installed them. This is the first time I've seen a hardware upgrade contribute this much to performance.
If you have a high-load setup, try upgrading to SSDs before making any software or architecture changes.
Thanks everyone for your answers.
Related
I have a system where data is inserted through a stored procedure that is called via a WCF service.
The system currently has 12,000+ actively logged-in users who call the WCF service every 30 seconds (effectively a minimum of 200 requests per second).
On the SQL Server side, CPU usage shoots up to 100%, and when I examined it, more than 90% of the time was spent in DB writes. This affects overall server performance.
I need suggestions for resolving this issue so that we have fewer DB write operations and more CPU remains free.
I am open to integrating any other DB server, or using Entity Framework or any other ORM combination if needed. I need a solution that handles this issue.
Other information that might be helpful:
Table has no indexes defined
Database has growth factor set to 200MB.
SQL Server Version is 2012.
Simple solution: batch the writes. Do not call into SQL Server for every insert.
Make a service that collects them and submits them more coarsely. The main problem is that transaction handling is fairly heavy cost-wise; in cases like this it makes sense to batch the writes.
Do not call a stored procedure for every row; load the rows into a temp table and then process them in bulk (or use a table-valued parameter to pass the SP multiple rows at once), as sketched below.
This gets rid of a lot of issues, including a ton of commits (you are basically asking for around 200 transactions per second, which is quite heavy and not needed here).
How you do that is up to you, but for something this heavy I would stay away from an ORM (Entity Framework is hilariously bad at batching; that would turn into tons of SP calls) and use handcrafted SQL, at least for this part. I love ORMs, but it is always nice to have a high-performance, hand-crafted approach when needed.
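A rough sketch of the staging-table variant, using plain ADO.NET; the table and procedure names (dbo.ReadingStaging, dbo.usp_ProcessReadings) are made up for illustration:

```csharp
using System.Data;
using System.Data.SqlClient;

public static class BulkWriter
{
    // Pushes a collected batch of rows with one bulk copy and one set-based procedure call
    // instead of one stored-procedure call (and one commit) per row.
    public static void FlushBatch(DataTable bufferedRows, string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            using (var bulk = new SqlBulkCopy(conn))
            {
                bulk.DestinationTableName = "dbo.ReadingStaging";
                bulk.WriteToServer(bufferedRows);
            }

            // The procedure moves the staged rows into the real table in a single statement.
            using (var cmd = new SqlCommand("dbo.usp_ProcessReadings", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```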
All,
Need some info.
We have stores at multiple locations and use a client-server app installed at each store for sales activity.
Sales data is stored in a database that is set up in every store.
At the end of the day, a batch job pulls data from all of the store locations and updates the main warehouse database.
We want a real-time implementation so that whenever there is a transaction at any store, the data is updated immediately in the main warehouse repository.
Any clue as to how we can achieve real-time updates of data to the main warehouse?
Thanks in advance...
One approach to this is called replication. There are several ways to do it in SQL Server. You're probably looking for transactional replication or merge replication.
Here's a place to start in the SQL Server 2012 documentation.
And here's a fairly recent overview that might be helpful.
You should make sure you understand what "real time" means and how real-time you really need to be. If you are not pre-aggregating data before storing it in the warehouse, then you should be able to set up replication between the database servers (provided they can talk to each other). If you are loading an aggregate, it gets tricky, because you have to merge the measures (facts) into the warehouse's existing measures, which is tough. If you don't need true real time, just a steady trickle, then consider simply running your current process on a schedule in SQL Agent.
First off, why not run the batch multiple times a day? It would not really be "real time" but might yield good enough real-world results.
One option would be to implement master-master replication provided by the SQL engine in use, although this probably means taking some steps to guard against duplicate IDs, auto-increment mismatches, etc. For example, we have a master-master system set up so that one server produces entries with odd IDs and the other with even IDs.
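As a concrete illustration of that odd/even arrangement (the table and column names here are hypothetical), each master can be created with the same identity increment but a different seed:

```csharp
using System.Data.SqlClient;

public static class SchemaSetup
{
    // Run with seed 1 on master A (ids 1, 3, 5, ...) and seed 2 on master B (ids 2, 4, 6, ...).
    public static void CreateSalesTable(string connectionString, int seed)
    {
        string ddl =
            "CREATE TABLE dbo.SalesEntry (" +
            "  Id INT IDENTITY(" + seed + ", 2) PRIMARY KEY," +
            "  StoreCode NVARCHAR(16) NOT NULL," +
            "  Amount DECIMAL(18,2) NOT NULL," +
            "  CreatedAt DATETIME NOT NULL DEFAULT GETDATE())";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(ddl, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```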
Another approach would be to perform all reads against the local databases and all writes against a single remote master, with the data replicated in a master-slave setup. This would provide the best data consistency, but a slow network would make any writes slow. We have this kind of setup implemented on top of the master-master replication, as most interactions are reads.
One real-world use case I have come across for a similar stores/warehouse setup was based on Firebird SQL. Every single table had triggers that recorded every action on the local database in so-called log tables, and a replication application ran at all times, regularly checking these log tables, pushing the data to a remote database, and pulling in new data from the remote (which had its own log tables). The downside was that it was a horror to maintain: triggers had to be updated whenever the database setup changed, and the replication application would fail or hang at times. Data consistency was maintained well, though, with conflicts resolved by using negative IDs for the local databases and positive IDs for the master/remote. But in the end it did not really provide true "real time".
In the end, there is no one-size-fits-all answer, and books could probably be written on the topic. Research and Google are your friends.
I have noticed that our web application queries a particular table an enormous number of times. The table is relatively small, with only about a hundred rows in use.
I'm wondering if there is a way to load this table into memory in the web application every 15 minutes or so, so the system doesn't have to run so many queries to get the same information over and over again. The cached copy would be shared across many different users.
The table is the Client table, so users log in from many different clients. The data is pretty static, probably getting updated perhaps once a day.
Update: SQL Profiler shows the query is run quite a bit, which is what concerns me. The website is not notably slow; I just thought this could help make it even faster.
If the table is small and frequently queried, there is an excellent chance that the data and any indexes are entirely in SQL Server's memory, that the query plan is cached, and that the query will be extremely fast.
Measure the actual performance impact before making any changes.
If you see there is a performance impact, there are many caching strategies that you can use to reduce trips to the database. More information about access patterns to the table and the need for information consistency would be needed to recommend a specific caching strategy.
You state
to get the same information over and over again
but also state
once every 15 minutes
If the information really is the same over and over, you can load it once into the ASP.NET cache at application start. If it might change every so often, but it is OK for the data to be a little out of date for a given user, you can use a time-based cache expiration policy. If the data changes only every so often but must be up to date immediately after it changes, you can consider a SqlDependency for cache expiration.
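A minimal sketch of the time-based option, assuming a placeholder Client type and a hypothetical GetClientsFromDatabase() data-access call; the 15-minute window mirrors the question:

```csharp
using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public class Client                     // placeholder for the real Client entity
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ClientCache
{
    private const string CacheKey = "ClientTable";

    public static IList<Client> GetClients()
    {
        var cached = HttpRuntime.Cache[CacheKey] as IList<Client>;
        if (cached != null)
            return cached;                      // served from memory, no database trip

        IList<Client> clients = GetClientsFromDatabase();
        HttpRuntime.Cache.Insert(
            CacheKey,
            clients,
            null,                               // no dependency; swap in a SqlCacheDependency if needed
            DateTime.UtcNow.AddMinutes(15),     // absolute, time-based expiration
            Cache.NoSlidingExpiration);
        return clients;
    }

    private static IList<Client> GetClientsFromDatabase()
    {
        // Hypothetical data-access call; query the Client table here.
        return new List<Client>();
    }
}
```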
For more information on ASP.Net caching see
http://msdn.microsoft.com/en-us/library/xsbfdd8c(v=vs.100).aspx
and specifically
http://msdn.microsoft.com/en-us/library/6hbbsfk6(v=vs.100).aspx
My suggestion would be to create a WCF Windows service: using REST you could easily cache the results read from the SqlDataReader (or other DataReader) and apply a TTL so the data is re-queried at an interval.
Well, there are a few solutions.
If you want to load the data into memory every 15 minutes, you could use one of the .NET caching libraries, for example System.Runtime.Caching, where you can set expiration policies and more.
You could try to optimize your query with nonclustered indexes.
You could use AppFabric Caching, or something similar.
And lastly, try adding more memory to the SQL Server machine.
I've got a SaaS application which is being deployed to clients as a ClickOnce app, where the client connects to my cloud server via a WCF NetTcp connection. The data is all stored on the server, but the client needs to be able to see his data.
It's all working, but I'm having a bit of trouble when there's a lot of data to transfer, e.g. in one table a client might have about 3,000 records, and that takes an awful long time to come through the WCF connection. So that's problem 1: how to pull the necessary data from the server. Right now, trying to do it synchronously, it's simply timing out. I could up the timeout limits, but that feels a bit too much like brute force. Would you recommend some kind of asynchronous solution - and if so, how would you do that through WCF?
Problem 2: having got all this data down once, it would make sense to cache it and do some kind of background synchronization to make sure it stays fresh. But how to cache it? Should I ship a SQL Express DB with the ClickOnce app? Or is there a simpler way? And where can you save the cache data, bearing in mind the sandbox restrictions of a ClickOnce app?
For problem 1, you can invoke the operation that takes a long time asynchronously. Also consider not retrieving all 3000 records at once. If your use case allows, you could page the data.
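For example, a paged contract could look roughly like this (the contract, DTO, and page size below are all hypothetical):

```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class RecordDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Description { get; set; }
}

[ServiceContract]
public interface IRecordService
{
    // Each call returns one small page, keeping individual messages well under the timeout.
    [OperationContract]
    List<RecordDto> GetRecordsPage(int pageIndex, int pageSize);
}

public static class RecordDownloader
{
    // Client side: keep pulling pages until a short page signals the end.
    public static List<RecordDto> DownloadAll(IRecordService service, int pageSize = 250)
    {
        var all = new List<RecordDto>();
        for (int page = 0; ; page++)
        {
            List<RecordDto> batch = service.GetRecordsPage(page, pageSize);
            all.AddRange(batch);
            if (batch.Count < pageSize)
                break;
        }
        return all;
    }
}
```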
For problem 2, yes, you could cache it. You could do something as simple as an in-memory thread-safe dictionary, or a thread-safe singleton instance of the data if that makes sense. If you need to persist to disk, I would opt for a file-based database like SQL CE or SQLite so that no client installation is required.
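And a sketch of the thread-safe in-memory option, reusing the hypothetical RecordDto type from the paging example above:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;

public static class ClientSideCache
{
    // Keyed by record id; safe to read and update from multiple threads.
    private static readonly ConcurrentDictionary<int, RecordDto> Records =
        new ConcurrentDictionary<int, RecordDto>();

    public static void Put(RecordDto record)
    {
        Records[record.Id] = record;
    }

    public static bool TryGet(int id, out RecordDto record)
    {
        return Records.TryGetValue(id, out record);
    }

    // A background synchronisation task can call this with fresh data from the server.
    public static void Refresh(IEnumerable<RecordDto> freshRecords)
    {
        foreach (var record in freshRecords)
            Records[record.Id] = record;
    }
}
```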
Depending on the data and how much of it is allowed to be cached or synchronised you could look at the replication features in SQL Compact 3.5 (not 4.0) as this supports partial synchronisation with SQL server.
Failing that you could store the data in your own database locally (for this I'd recommend SQL Compact 4.0 rather than SQL Express) and then you'd have to have your own updating logic.
Alternatively if you don't need to cache the data for long you could use the Enterprise Library caching blocks and store the data in Application Storage. This method would be best suited if you wanted to cache queries and results rather than individual records.
Following on from this question...
What to do when you’ve really screwed up the design of a distributed system?
... the client has reluctantly asked me to quote for option 3 (the expensive one), so they can compare prices to a company in India.
So, they want me to quote (hmm). To make the quote as accurate as possible, I need to decide how I'm actually going to do it. Here are 3 scenarios...
Scenarios
Split the database
My original idea (perhaps the trickiest) would yield the best speed for both the website and the desktop application. However, it may require some synchronisation between the two databases, as the two "systems" are so heavily connected. I've learnt that if synchronisation is not done properly and not tested thoroughly, it can be hell on earth.
Implement caching on the smallest system
To side-step the sync option (which I'm not fond of), I figured it may be more productive (and cheaper) to move the entire central database and web service to their office (i.e. in-house), and have the website (still on the hosted server) download data from the central office and store it in a small database (acting as a cache)...
Set up a new server in the customer's office (in-house).
Move the central database and web service to the new in-house server.
Keep the web site on the hosted server, but alter the web service URL so that it points to the office server.
Implement a simple cache system for images and most frequently accessed data (such as product information).
... the downside is that when the end user in the office updates something, their customers will effectively be downloading the data over a 60 KB/s upload connection (albeit only once, as it will then be cached).
Also, not all data can be cached; for example, a customer updating their order. Connection redundancy also becomes a huge factor here: what if the office connection is offline? There is nothing to do but show an error message to the customers, which is nasty, but a necessary evil.
Mystery option number 3
Suggestions welcome!
SQL replication
I had considered MSSQL replication, but I have no experience with it, so I'm worried about how conflicts are handled, etc. Is this an option, considering there are physical files involved, and so on? Also, I believe we'd need to upgrade from SQL Express to a paid SQL Server edition and buy two licences.
Technical
Components
ASP.Net website
ASP.net web service
.Net desktop application
MSSQL 2008 express database
Connections
Office connection: 8 mbit down and 1 mbit up contended line (50:1)
Hosted virtual server: Windows 2008 with 10 megabit line
Having just read your original question related to this for the first time, I'd say you may already have laid the foundation for resolving the problem, simply because you communicate with the database through a web service.
This web service may well be the saving grace, as it allows you to split the communications without affecting the client.
A good while back I was involved in designing just such a system.
The first thing we identified was the data that rarely changes, and we immediately locked all of it out of consideration for distribution. A manual administration process through the web server was the only way to change this data.
The second thing we identified was the data that should be owned locally. By this I mean data that only one person or location at a time would need to update, but that may need to be viewed at other locations. We fixed all of the keys on the related tables to ensure that duplication could never occur, and no auto-incrementing fields were used.
The third item was the tables that were truly shared, and although we worried a lot about these during stages 1 and 2, in our case this part turned out to be straightforward.
When I'm talking about a server here I mean a DB Server with a set of web services that communicate between themselves.
As designed, our architecture had one designated 'master' server. This was the authority for resolving conflicts.
The rest of the servers were, in the first instance, a large cache of anything covered by item 1. In fact it wasn't so much a cache as a database duplicate, but you get the idea.
The second function of each non-master server was to coordinate changes with the master. This involved a very simplistic process of passing most of the work transparently through to the master server.
We spent a lot of time designing and optimising all of the above, only to discover that the single best performance improvement came from simply compressing the web service requests to reduce bandwidth (although it was over a single-channel ISDN line, which probably made the biggest difference).
The fact is that if you do have a web service, it gives you greater flexibility in how you implement all of this.
I'd probably start by investigating the feasibility of implementing one of the SQL Server replication methods.
Usual disclaimers apply:
Splitting the database will not help a lot, but it will add a lot of nightmares. IMO, you should first try to optimise the database: update some indexes or perhaps add a few more, optimise some queries, and so on. For database performance tuning I recommend reading some of the articles on simple-talk.com.
Also, to save bandwidth, you can add bulk processing to your Windows client and add zipping (compression) to your web service.
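For the zipping part, here is a rough sketch of compressing a payload with GZipStream before it goes over the slow upload link (the byte[] payload stands in for whatever the service serialises):

```csharp
using System.IO;
using System.IO.Compression;

public static class PayloadCompression
{
    // Compress a serialised payload before sending it over the slow upload link.
    public static byte[] Compress(byte[] payload)
    {
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(payload, 0, payload.Length);
            }
            return output.ToArray();
        }
    }

    // The receiving side reverses it before deserialising.
    public static byte[] Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            gzip.CopyTo(output);
            return output.ToArray();
        }
    }
}
```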
You should probably also upgrade to MS SQL 2008 Express; it's free as well.
It's hard to recommend a good solution for your problem with the information I have; it's not clear where the bottleneck is. I strongly recommend that you profile your application to find the exact location of the bottleneck (e.g. is it in the database, or in a fully saturated connection, and so on) and add a description of it to the question.
EDIT 01/03:
If the bottleneck is the upload connection, then you can only do the following:
1. Add compression of messages to the service and the client
2. Implement bulk operations and use them
3. Try to reduce the number of operations per use case for the most frequent cases
4. Add a local database for the Windows clients, perform all operations against it, and synchronise the local DB with the main one on a timer
SQL replication will not help you much in this case. The fastest and cheapest solution is to increase the upload connection, because all the other options (except the first one) will take a lot of time.
If you choose to rewrite the service to support bulking, I recommend you have a look at the Agatha project.
Actually, hearing how many users they have on that one connection, it may be time to increase the bandwidth at the office (not at all my normal response). If you factor out the CRM system, what else is a top consumer of the bandwidth? They may simply have reached the point of needing more bandwidth, period.
But I am still curious to see how much of the information you are passing is actually being used. Make sure you are transferring data efficiently; if there is any chance, add some quick and easy measurements to see how much data people actually consume when looking at it.