I have an ASP.NET (aspx) website with many pages which accept user input, pull data from a SQL database, do some heavy data processing, and finally present the data to the user.
The site is getting bigger and bigger and it is starting to put a lot of stress on the server.
What I want to do is to maybe separate things a bit:
Server-A will host the website, the site will accept input from users and pass those parameters to applications running on Server-B
Server-B will fetch data from SQL, do the heavy data processing, then pass a dataset or datatable object back to the website.
Is this possible?
Sure, this is called an N-Tier Architecture.
The most obvious separation is a dedicated database server, tuned to meet the demands of a database (fast disks, lots of RAM), plus one or more separate web servers.
You can expand on that by placing an application tier between the web server and the database server. The application tier can accept the user input that was collected in the web tier, interact with the database, do the heavy crunching, and return the result to the web tier. Most typically, you would use Windows Communication Foundation (WCF) to expose the functionality of the application tier to the web server(s). Application servers are often tuned to have very fast CPUs, and might have slower disks and possibly less memory than database servers, depending on exactly what they need to do. The beauty of this solution is that you can just add more and more identical application servers as the load on your application grows.
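As a rough illustration (not your actual code - the service and operation names here are invented), the application tier on Server-B might expose something like this, which the website on Server-A then calls:

    using System.Data;
    using System.ServiceModel;

    // Hypothetical contract for the application tier on Server-B.
    [ServiceContract]
    public interface IProcessingService
    {
        // Takes the user input collected by the web tier, does the
        // heavy crunching against SQL, and returns the finished data.
        [OperationContract]
        DataTable GetProcessedData(string userInput);
    }

    public class ProcessingService : IProcessingService
    {
        public DataTable GetProcessedData(string userInput)
        {
            var result = new DataTable("Results");
            // ... fetch from SQL and do the heavy processing here ...
            return result;
        }
    }

The web tier consumes this through a generated proxy or a ChannelFactory, so adding another application server is just a matter of standing up another copy of the service.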
Depending on the business model, you may need a caching strategy to avoid repeating heavy calculations for every input.
Consider a stock website. Although there are many transactions each minute, it won't recalculate the market trend for each of them. Instead, it can schedule updates based on something like defined intervals (hourly, daily, ...) or a defined number of transactions.
The heavy task should run when the server load is low. That way visitors see the stock trend on the main page, and it stays accurate enough.
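A minimal sketch of that idea using .NET's MemoryCache (the types and method names around the calculation are made up; the interval should follow your business rules):

    using System;
    using System.Runtime.Caching;

    public static class TrendCache
    {
        // Recomputes the market trend at most once per hour,
        // no matter how many visitors hit the main page.
        public static MarketTrend GetTrend()
        {
            var trend = MemoryCache.Default["market-trend"] as MarketTrend;
            if (trend == null)
            {
                trend = ComputeTrendFromDatabase(); // the expensive part
                MemoryCache.Default.Set("market-trend", trend,
                                        DateTimeOffset.Now.AddHours(1));
            }
            return trend;
        }

        private static MarketTrend ComputeTrendFromDatabase()
        {
            // heavy calculation against SQL goes here
            return new MarketTrend();
        }
    }

    public class MarketTrend { /* aggregated figures */ }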
For such heavy-load scenarios, good design is everything, since sometimes even expensive hardware will not help much.
If you like, share some info about what is being done.
I would use a load balancer like an F5; that way your architecture does not change. But I would also use the n-tier approach to split your site into a data layer and a presentation layer. The load balancer will then direct each request to the server with the lightest load.
I have been searching for a service that could do something like notify a specific user that they have a new friend request. I came across SignalR and thought it might be useful to my application. I see that a lot of the examples and live uses of SignalR are chat applications, which makes sense. Anyway, here is what I am trying to accomplish: I have an MVC social application that uses RavenDB as the datastore. A user may request friendship with another user, and I would like to update that client in real time that they have a new request (something that checks every X seconds). I am looking for either a good SignalR example, documentation that may point me in the right direction, or a good service other than SignalR that would suit my app better. Thanks for any answers.
SignalR would definitely suit your app well. JabbR (http://jabbr.net/, https://github.com/davidfowl/JabbR) for instance may be a chat room but it is constantly reaching out to the database to update/retrieve its records.
For your case I'd recommend queuing up a command on database writes to notify other users, rather than checking periodically. Meaning, let's say user A requests to be friends with user B: first that request is written to the database, and then a message is broadcast via SignalR to all parties involved.
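A rough sketch of that flow (the hub, method, and callback names are all invented, and the exact API differs between SignalR versions):

    using SignalR.Hubs; // newer versions: Microsoft.AspNet.SignalR

    public class NotificationHub : Hub
    {
        // Each client joins a group named after its user when it
        // connects, so individual users can be targeted later.
        public void Register(string userName)
        {
            Groups.Add(Context.ConnectionId, userName);
        }

        // Called once the friend request has been written to RavenDB.
        public void SendFriendRequest(string fromUser, string toUser)
        {
            // friendRequestReceived is a callback defined client-side.
            Clients[toUser].friendRequestReceived(fromUser);
        }
    }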
However, if you still would like to implement a timer check every X seconds, this is still possible. See ShootR (shootr.signalr.net, https://github.com/NTaylorMullen/ShootR), a multiplayer game that utilizes a game timer and broadcasts collisions when it detects them. Granted, ShootR is doing calculations on the server at a much higher frequency (50+ times per second), but it's essentially the same approach.
Therefore, if you want to take the check-every-X-seconds approach, I'd suggest taking a hybrid of the two projects (JabbR & ShootR): implement a threaded timer (instead of the custom high-frequency timer that ShootR uses), retrieve data from the database on each tick, and use that data to send updates to users, as sketched below.
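Something along these lines (a sketch only - the RavenDB query and the broadcast are placeholders):

    using System;
    using System.Collections.Generic;
    using System.Threading;

    public class FriendRequestChecker
    {
        private readonly Timer _timer;

        public FriendRequestChecker(TimeSpan interval)
        {
            // Threaded timer: fires on a thread-pool thread every X seconds.
            _timer = new Timer(Check, null, TimeSpan.Zero, interval);
        }

        private void Check(object state)
        {
            // Query RavenDB for requests created since the last check,
            // then push each one out through the hub from the previous sketch.
            foreach (var request in GetNewRequests())
            {
                // e.g. notify the target user via SignalR here
            }
        }

        private IEnumerable<object> GetNewRequests()
        {
            yield break; // placeholder for the real RavenDB query
        }
    }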
Hope this helps!
I have a method which calls a stored procedure 300 times in a for loop, and each time the stored procedure returns 1200 records. How can I improve this? I cannot eliminate the 300 calls, but are there any other ways I can try? I am using a REST service implemented through ASP.NET, with IBATIS for database connectivity.
I cannot eliminate the 300 calls
Eliminate the 300 calls.
Even if all you can do is to just add another stored procedure which calls the original stored procedure 300 times, aggregating the results, you should see a massive performance gain.
Even better if you can write a new stored procedure that replicates the original functionality but is structured more appropriately for your specific use case, and call that, once, instead.
Making 300 round trips between your code and your database quite simply is going to take time, even where the code and the database are on the same system.
Once this bit of horrible is resolved, there will be other things you can look to optimise, if required.
Measure.
Measure the amount of time spent inside the server-side code. Measure the amount of that time that is spent in the stored procedure. Measure the amount of time spent on the client side. Do some math, and you have a rough estimate for network time and other overheads.
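For instance, with Stopwatch (the two calls being measured are placeholders for your IBATIS call and your own processing):

    using System;
    using System.Diagnostics;

    static class Timing
    {
        public static void MeasureOneRequest()
        {
            var total = Stopwatch.StartNew();

            var db = Stopwatch.StartNew();
            var records = CallStoredProcedure(); // your IBATIS call
            db.Stop();

            var work = Stopwatch.StartNew();
            ProcessRecords(records);             // your server-side processing
            work.Stop();

            total.Stop();
            // total minus (db + work) approximates network and other overheads
            Console.WriteLine("db={0}ms work={1}ms total={2}ms",
                db.ElapsedMilliseconds, work.ElapsedMilliseconds,
                total.ElapsedMilliseconds);
        }

        static object CallStoredProcedure() { return null; } // placeholder
        static void ProcessRecords(object records) { }       // placeholder
    }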
Returning 1200 records, I would expect network bandwidth to be one of the main issues; you could perhaps investigate whether a different serialization engine (with the same output type) might help, or perhaps whether adding compression (gzip / deflate) support would be beneficial (meaning: reduced bandwidth being more important than the increased CPU required).
Latency might be important if you are calling the REST service 300 times; maybe you can parallelize slightly, or make fewer big calls rather than lots of small calls.
You could batch the SQL code, so you only make a few trips to the DB (calling the SP repeatedly in each) - that is perfectly possible; just use EXEC etc (still using parameterization).
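A sketch of that batching from ADO.NET (the proc name and parameter names are invented; the point is one round trip carrying many parameterized EXECs):

    using System.Data;
    using System.Data.SqlClient;
    using System.Text;

    static DataSet FetchBatch(string connectionString, int[] ids)
    {
        var sql = new StringBuilder();
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = conn.CreateCommand())
        {
            // One command text containing N parameterized EXECs.
            for (int i = 0; i < ids.Length; i++)
            {
                sql.AppendFormat("EXEC dbo.GetRecords @id{0};", i);
                cmd.Parameters.AddWithValue("@id" + i, ids[i]);
            }
            cmd.CommandText = sql.ToString();

            var ds = new DataSet();
            using (var adapter = new SqlDataAdapter(cmd))
            {
                adapter.Fill(ds); // each EXEC becomes a separate DataTable
            }
            return ds;
        }
    }

In practice you might split the 300 calls into a handful of such batches rather than one giant one, which is still a huge win over 300 round trips.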
You could look at how you are getting the data from ADO.NET to the REST layer. You mention IBATIS, but have you checked whether this is fast / slow compared to, say, "dapper" ?
Finally, the SP performance itself can be investigated; indexing or just a re-structuring of the SP's SQL may help.
Well, if you have to return 360,000 records, you have to return 360,000 records. But do you really need to return 360,000 records? Start there and work your way down.
Without knowing too much of the details, the architecture appears flawed. On one hand, it's considered unreasonable to lock the tables for the 6 seconds it takes to retrieve the 360,000 records using a single stored procedure execution; but on the other hand, it's fine to return a possibly inconsistent set of 360,000 records retrieved via multiple stored procedure executions. It makes me wonder what exactly you are trying to implement, and whether there is a better way to design the integration between the client and the server.
For instance, if the client is retrieving a set of records that have been created since the last request, then maybe a paged ATOM feed would be more appropriate.
Whatever it is you are doing, 360,000 records is a lot of data to move between the server and the client, so we should be looking at the architecture and purpose of that data transfer to make sure the current approach is appropriate.
Let's imagine any common operation being executed on a website.
After the user presses a button, the application should:
Check whether the operation is allowed or not (user rights, some object's consistency, relations, and other things)
Update DB record
Report
This is the business-logic layer's concern, as it's written in tons of books.
In fact, we first read data from the DB, then we write data to the DB. And in the case where the data was changed by any other user/process during our checks, we will put an invalid result into the database. It seems like the problem should be well known, but I still can't find a good solution for this.
The question is: why do we need a business logic layer if it gives us no way to maintain business transactions?
You would probably say TransactionScope. Well, how do you prevent the data you have read from being changed by external processes? How is it possible to take an UPDLOCK from the business layer? And if it is possible, wouldn't it all be much more expensive than doing transactions in stored procedures?
There is no way to move just a part of the logic into the DB - only the whole thing. Both parts #1 and #2 must be executed in the same transaction; moreover, the data that was read must stay locked until the update has been made.
Ideas?
I really think you are arguing this from the wrong angle. Firstly, in your specific example, there doesn't seem to be anything saying that a change in votes by one user invalidates the attempt of another user to register an upvote. So, if I opened the page, and there were 200 votes for the item, and I clicked upvote, I don't really care if 10 other people have done the same in the meantime. So, validations can be run by the business layer, and if the result is that the vote can go through, the update can be done in an atomic way using a single SQL statement (e.g. UPDATE Votes SET VoteCount = VoteCount + 1 WHERE ID = @ID), or a SELECT with UPDLOCK and the UPDATE wrapped in a transaction. The tendency for ORMs and developers to go with the latter approach is neither here nor there; you still have the option to change the implementation if you so choose.
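To make that concrete, here is a minimal sketch of the first option from the business layer (connection handling is simplified, table and column names taken from the example above):

    using System.Data.SqlClient;

    // Validations run in the business layer first; the write itself is
    // a single atomic statement, so there is no read-then-write gap.
    static void Upvote(string connectionString, int itemId)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE Votes SET VoteCount = VoteCount + 1 WHERE ID = @ID",
            conn))
        {
            cmd.Parameters.AddWithValue("@ID", itemId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }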
Now, if the requirement is that an update to the vote count actually invalidates my vote, then it's a completely different situation. In this case, we are absolutely correct to use either optimistic or pessimistic concurrency, but these are (obviously) not applicable to a website where hundreds of people may vote at the same time, for the same item. The issue here is not the implementation, it's the nature of allowing multiple people to work on the same item.
So, to summarise, there's nothing stopping you from having a business layer outside of the DB and keeping the increment atomic. At the same time, you hopefully enjoy the benefits of having your business logic outside of the DB (which is a post in itself, but I'd argue that it's a large benefit).
I need to build a server that accepts client connections at a very high frequency and load (each user will send a request every 0.5 seconds and should get a response in under 800 ms; I should be able to support thousands of users on one server). The assumption is that the SQL Server is finely tuned and will not pose a problem (an assumption that, of course, might not be true).
I'm looking to write a non-blocking server to accomplish this. My back end is an SQL Server which is sitting on another machine. It doesn't have to be updated live - so I think I can cache most of the data in memory and dump it to the DB every 10-20 seconds.
Should I write the server using C# (which is more compatible with SQL Server)? Maybe Python with Tornado? What should my considerations be when writing a high-performance server?
EDIT: (added more info)
The Application is a game server.
I don't really know the actual traffic - but this is the prognosis and the server should support it and scale well.
It's hosted "in the cloud" in a Datacenter.
Language doesn't really matter. Performance does. (a Web service can be exposed on the SQL Server to allow other languages than .NET)
The connections are very frequent but small (very little data is returned and little computations are necessary).
It should hold most of the data in the memory for fastest performance.
Any thoughts will be much appreciated :)
Thanks
Okay, if you REALLY need high performance, don't go for C# but for C/C++; it's obvious.
In any case, the fastest way to do server programming (as far as I know) is to use IOCP (I/O Completion Ports). That's what I used when I made an MMORPG server emulator, and it performed faster than the official C++ select-based servers.
Here's a very complete introduction to IOCP in C#
http://www.codeproject.com/KB/IP/socketasynceventargs.aspx
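For a feel of the shape of it, here is a stripped-down accept loop built on SocketAsyncEventArgs (error handling, buffer pooling, and the receive side are omitted; the article above covers a production-grade version):

    using System.Net;
    using System.Net.Sockets;

    class AsyncServer
    {
        private readonly Socket _listener = new Socket(
            AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

        public void Start(int port)
        {
            _listener.Bind(new IPEndPoint(IPAddress.Any, port));
            _listener.Listen(100);
            AcceptNext();
        }

        private void AcceptNext()
        {
            var args = new SocketAsyncEventArgs();
            args.Completed += (sender, e) => OnAccept(e);
            // AcceptAsync returns false when it completed synchronously,
            // in which case the Completed event will not fire.
            if (!_listener.AcceptAsync(args))
                OnAccept(args);
        }

        private void OnAccept(SocketAsyncEventArgs e)
        {
            Socket client = e.AcceptSocket;
            // Hand 'client' off to a receive loop built the same way,
            // ideally reusing pooled SocketAsyncEventArgs instances.
            AcceptNext(); // keep accepting new connections
        }
    }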
Good luck!
Use the programming language that you know best. It's a lot more expensive to hunt down performance issues in a large application that you do not fully understand.
It's a lot cheaper to buy more hardware.
People will say C++, because garbage collection in .NET could kill your latency. You could avoid garbage collection, though, if you are clever about reusing existing managed objects.
Edit: your assumption about SQL Server is probably wrong. You need to store your state in memory for random access. If you need to persist changes, journal them to the filesystem and consolidate them with the database infrequently.
Edit 2: You will have a lot of different threads talking to the same data. In order to avoid blocking and deadlocks, learn about lock-free programming (Interlocked.CompareExchange etc.).
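A classic compare-and-swap retry loop, as a sketch (the "high score" here is just an arbitrary example of shared state):

    using System.Threading;

    static class SharedState
    {
        private static int _highScore;

        public static void SubmitScore(int score)
        {
            while (true)
            {
                int current = _highScore;
                if (score <= current)
                    return; // nothing to update
                // Swap only if no other thread changed it in the meantime.
                if (Interlocked.CompareExchange(
                        ref _highScore, score, current) == current)
                    return; // our write won
                // another thread won the race - loop and retry
            }
        }
    }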
I was part of a project that included very high-performance server code, which actually included the ability to respond with a TCP packet within 12 milliseconds or so.
We used C# and I must agree with jgauffin - a language that you know is much more important than just about anything.
Two tips:
Writing to console (especially in color) can really slow things down.
If it's important for the server to be fast on the first requests, you might want to use a pre-JIT compiler (see Ngen.exe) to avoid JIT compilation when those first requests come in.
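For example, as part of deployment you could run something like this from the .NET Framework directory (the assembly name is hypothetical):

    ngen install GameServer.exe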
I am looking to build a web application that will allow users to manage simple tasks or todos. As part of this, I'd like to send users email reminders daily of their tasks. I'd like each user in the system to specify what time of day they get their reminder, though. For example, one user can specify to get their emails at 8AM while another user might choose to get their emails at 7PM. Using .Net, what is the best way to architect and build this "reminder" service? Also, as a future phase to this, I'd like to expand the reminders to SMS text and/or other forms of notification.
Any guidance would be appreciated.
Your question is a little broad, but there are two general approaches to handling recurring tasks on Windows:
Write a service, and design it to check for tasks periodically. You might have it poll a database at a fixed interval and execute any reminders due at that time (marking off those it has completed) - see the sketch after this list.
Run a program as a Windows scheduled task (the Microsoft equivalent of cron). Run it every hour, and check a database as above.
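A bare-bones sketch of the first approach (the schema, query, and SMTP details are all assumptions):

    using System;
    using System.Net.Mail;
    using System.Threading;

    public class ReminderService
    {
        private Timer _timer;

        // Called from the service's OnStart.
        public void Start()
        {
            _timer = new Timer(SendDueReminders, null,
                               TimeSpan.Zero, TimeSpan.FromMinutes(1));
        }

        private void SendDueReminders(object state)
        {
            // Hypothetical query: reminders whose user-chosen time falls
            // in the minute being processed and that haven't been sent.
            foreach (var reminder in GetRemindersDueAt(DateTime.Now))
            {
                using (var smtp = new SmtpClient("mail.example.com"))
                {
                    smtp.Send("reminders@example.com", reminder.Email,
                              "Your tasks for today", reminder.Body);
                }
                MarkAsSent(reminder); // so it isn't re-sent on the next poll
            }
        }

        // Placeholders for the real data access.
        private Reminder[] GetRemindersDueAt(DateTime when) { return new Reminder[0]; }
        private void MarkAsSent(Reminder reminder) { }
    }

    public class Reminder { public string Email; public string Body; }

Adding SMS later is then just another delivery branch inside SendDueReminders.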
I'm assuming you know how to send email from .NET - that's pretty straightforward, and most carriers have mail-to-SMS gateways.
I suppose you could do it two ways:
Windows Service running in the background, which polls the DB, looks for items at the current time, sleeps if it finds nothing, loops
This is fine enough, but may not scale well if there are suddenly thousands of items to process within 30 seconds or so, as it will take too long to get through them. You can get around this by building it with MSMQ in mind, which allows distribution over different machines, and so on.
A similar Windows Service, but one that can be interacted with via some sort of pulse/wait system that fires each time a new DB entry is made. It can then legitimately sleep, but this approach is kind of "brittle".
I'd probably go with the first approach.
I would create a service that runs all the time and checks once a minute for tasks to perform. You could then perform whatever actions you need. You can create a website for users that talks to a database the service also reads from. If you want to get a little more fancy, add a WCF interface to your service and have the website go through the service to store the needed reminders in the database.