We have a .NET application using ADO.NET and Entity Framework, plus a lot of legacy stored procedures. Occasionally an operation will drive the database server's CPU to 100% for seconds or even minutes, during which time no other operations can be executed against the database. Some of the culprit code is far too complex and business-critical to refactor feasibly in the short term, but this can also happen from newer code depending on the situation.
I would like to prevent any one SQL operation from taking 100% of the CPU. Is there any way to configure MS SQL to give no more than, say, 20% of the CPU to any one query?
I know that ideally we would rewrite the code to not be as intensive, but that is not feasible in the short term, so I'm looking for a general setting which ensures this can never happen.
Take a look at the Resource Governor (assuming you're using SQL Server 2008 or later). Although it won't necessarily target a specific query, a reasonable classifier function should let you narrow things down pretty closely if you like. Searching for "SQL Server classifier function" will turn up some decent guidance.
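To make that concrete, below is a minimal sketch of the kind of setup the Resource Governor involves, scripted from .NET against the master database. The pool, workload group, classifier function, and login names (LegacyAppPool, LegacyAppGroup, fnClassifyLegacyApp, LegacyAppUser) are made up for illustration, and the connection string is a placeholder. Also note that MAX_CPU_PERCENT only throttles when the CPU is actually under contention; on SQL Server 2012 and later you can add CAP_CPU_PERCENT for a hard cap.

```csharp
// Sketch only: sets up a Resource Governor pool, workload group, and classifier
// function so sessions from a given login are limited to ~20% CPU under contention.
// All object names are hypothetical; run with a login that can ALTER SERVER STATE.
using System.Data.SqlClient;

class ResourceGovernorSetup
{
    static void Main()
    {
        // Placeholder connection string; must target the master database.
        const string connectionString =
            "Server=.;Database=master;Integrated Security=true";

        string[] batches =
        {
            "CREATE RESOURCE POOL LegacyAppPool WITH (MAX_CPU_PERCENT = 20);",

            "CREATE WORKLOAD GROUP LegacyAppGroup USING LegacyAppPool;",

            // The classifier runs for every new session and returns the name of
            // the workload group that session should be placed in.
            @"CREATE FUNCTION dbo.fnClassifyLegacyApp() RETURNS sysname
              WITH SCHEMABINDING
              AS
              BEGIN
                  DECLARE @grp sysname = N'default';
                  IF SUSER_SNAME() = N'LegacyAppUser'  -- hypothetical app login
                      SET @grp = N'LegacyAppGroup';
                  RETURN @grp;
              END;",

            "ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifyLegacyApp);",
            "ALTER RESOURCE GOVERNOR RECONFIGURE;"
        };

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            foreach (var sql in batches)
            {
                using (var command = new SqlCommand(sql, connection))
                {
                    command.ExecuteNonQuery(); // each statement is its own batch
                }
            }
        }
    }
}
```

The same statements can of course be run directly in Management Studio; the C# wrapper is just so the setup can live in a deployment script.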
I have a web page where I need to get fields from many tables in SQL that can all be joined. In terms of performance, is it better to run a few separate queries against the database or one big SQL statement? I'm using MVC with Web API calls.
Thanks.
In the past I've created DB views to define the data I'm looking for. Depending on the data access framework you are using, this can also be helpful in returning the data and translating it to objects.
As far as performance goes, I'm most familiar with SQL Server, and in Management Studio there is an option to "Include Actual Execution Plan". This will show you a full breakdown of your JOIN statements and what indexes are being used, if any, and will suggest indexes to improve performance. I recommend this tool to all developers on my teams when they are stuck with a slow-performing page.
One other thing to note: the database configuration also makes a difference. If you are running a local database you will have fewer concerns than if you were running a cloud-based database (Azure SQL, etc.), as those have management overhead beyond your control when it comes to the availability and physical location of your instance at any given time.
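To illustrate the trade-off in code, here is a small sketch assuming Entity Framework is the data access layer; the context and entity names (ShopContext, Order, Customer, OrderLine) are hypothetical. The single-statement version is the one whose plan you would inspect with "Include Actual Execution Plan".

```csharp
using System.Collections.Generic;
using System.Data.Entity; // Entity Framework 6
using System.Linq;

// Hypothetical entities and context, just enough for the sketch to compile.
public class Customer  { public int Id { get; set; } public string Name { get; set; } }
public class Order     { public int Id { get; set; } public int CustomerId { get; set; } }
public class OrderLine { public int Id { get; set; } public int OrderId { get; set; }
                         public string ProductName { get; set; } public int Quantity { get; set; } }

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
    public DbSet<Order> Orders { get; set; }
    public DbSet<OrderLine> OrderLines { get; set; }
}

public static class OrderSummaryQueries
{
    // Option 1: one round trip. EF translates this into a single joined SELECT.
    public static IEnumerable<object> SingleQuery(ShopContext db, int orderId)
    {
        return (from o in db.Orders
                join c in db.Customers on o.CustomerId equals c.Id
                join l in db.OrderLines on o.Id equals l.OrderId
                where o.Id == orderId
                select new { OrderId = o.Id, CustomerName = c.Name, l.ProductName, l.Quantity })
               .ToList();
    }

    // Option 2: several small statements and several round trips.
    public static object MultipleQueries(ShopContext db, int orderId)
    {
        var order = db.Orders.Find(orderId);                                   // query 1
        var customer = db.Customers.Find(order.CustomerId);                    // query 2
        var lines = db.OrderLines.Where(l => l.OrderId == orderId).ToList();   // query 3
        return new { order, customer, lines };
    }
}
```

Neither is universally faster: the joined version saves round trips but can repeat parent data on every row, while the multi-query version keeps each statement simple. Comparing the actual plans and the number of round trips against your own data is what settles it.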
I'm developing a financial app for simple operations, but I need to be able to record any changes made to my records. I use C# ASP.NET for my application code and MS SQL 2014 for my DB.
My question is: which is better?
Application layer: write a method in code that creates a record in a history table in the DB?
Database layer: write a trigger/procedure that creates a record in a history table in the DB?
My main concern is performance - the app is hosted on an IIS server in my home, so I need to fine-tune this so it will be lag-free for my users and won't put a big workload on my server.
This has a lot to do with preferences and maintainability.
For the sake of maintainability, I always go with the Layered approach, myself.
Here's an example why.
Let's say you have your SQL inserts and updates scattered throughout the code. Then, one day, your database changes in such a way that you have to hunt down each and every one of those inserts and updates and change them one by one.
For the sake of argument, you could also create one function that does all of these inserts and updates and call that specific function every time from your app. That would work, but it could become cluttered, as you'll eventually end up with a lot of those functions for different tables, views, etc.
Now let's say you have used the layered approach. You would simply find the class that does all updates and inserts to one specific table and make your changes there. The rest of the application may not even need to be aware of the change.
Performance is (in my opinion) not really a factor. Creating an object isn't expensive at all; it's hardly even measurable. So I'd say go with the layered approach, as in the sketch below.
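As a concrete example of the application-layer, layered option, here is a small sketch using plain ADO.NET; the table and column names (Accounts, AccountHistory, and so on) are hypothetical. The point is that the update and its history record live in one class, so a schema change only has to be handled in one place.

```csharp
using System;
using System.Data.SqlClient;

// One class owns all writes to Accounts, including the audit row.
public class AccountRepository
{
    private readonly string _connectionString;

    public AccountRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void UpdateBalance(int accountId, decimal newBalance, string changedBy)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                // 1. The actual update.
                using (var update = new SqlCommand(
                    "UPDATE Accounts SET Balance = @balance WHERE Id = @id",
                    connection, transaction))
                {
                    update.Parameters.AddWithValue("@balance", newBalance);
                    update.Parameters.AddWithValue("@id", accountId);
                    update.ExecuteNonQuery();
                }

                // 2. The history record, written in the same transaction so the
                //    audit trail cannot get out of sync with the data.
                using (var audit = new SqlCommand(
                    "INSERT INTO AccountHistory (AccountId, NewBalance, ChangedBy, ChangedAt) " +
                    "VALUES (@id, @balance, @user, @at)",
                    connection, transaction))
                {
                    audit.Parameters.AddWithValue("@id", accountId);
                    audit.Parameters.AddWithValue("@balance", newBalance);
                    audit.Parameters.AddWithValue("@user", changedBy);
                    audit.Parameters.AddWithValue("@at", DateTime.UtcNow);
                    audit.ExecuteNonQuery();
                }

                transaction.Commit();
            }
        }
    }
}
```

The database-layer alternative is a trigger on the table that writes the same history row; it also catches changes made outside the application, but it spreads the logic across two places. Either way, one extra insert per write is unlikely to be what slows down a home-hosted IIS app.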
I guess what I'm asking is: is running my application under Mono going to be notably bad for performance? Note that the application is not meant to do or require anything other than access to a local relational database (probably MySQL).
Edit: The application is meant to do in-memory work with data queried from the database. The database itself should not be a bottleneck.
Also, the 'work' will be multi-threaded and must be anticipated to be as much "parallel" as "serial", if that makes sense...
Edit 2: Profiling hasn't been done yet, as the product is only now coming out of a long planning phase and beginning development, but the plan anticipates that likely use will make this mostly memory-intensive by design (so as to eventually let the database itself do as much of the work as possible, ideally). However, cases of "serial" work (i.e. number crunching) must be expected to occur by design, though my goal is to eliminate these cases as much as possible.
Edit 3: By number crunching, I mean literally any math formula serialized into the database and called into use for representing some kind of abstract data. Most of my (eventual) work will be to minimize this, however.
Your question indicates that you assume that Mono is some kind of .NET emulation, like wine is a Win32 emulation. This is not the case.
Mono is a native implementation of the .NET framework, so there is no reason why it should be fundamentally and/or generally slower than the implementation of the .NET framework on Windows.
It is a bad idea. I ported my application to Mono and it doesn't work well:
1. Mono is not stable when it comes to threads (try loading up many threads and see how it goes down).
2. Mono does not behave as expected with Windows Forms.
3. You are building a CPU-intensive app, and Mono is very slow (as is .NET), so use C++, Python, or something else.
When I read pro and con lists for using Entity Framework (or any modern ORM, really), I'm surprised that the following point doesn't come up (quoting myself):
Using strongly-typed domain entities allows for type checking at compile-time, which essentially performs a verification of all your database operations. This is something that is not possible with ADO.NET (whether using inline SQL or stored procedures).
For me, this is one of the biggest advantages of using an ORM. An issue that I come across regularly when dealing with ADO.NET-based applications is run-time errors from SQL. Static checking eliminates that whole class of errors.
Could anyone elaborate as to why this isn't hugely relevant to many developers?
Oh, it's great.
It's also not free. EF is essentially built on top of ADO.NET; it just uses reflection to convert back and forth between strongly typed classes and the actual column names. This is fine if your recordset is fairly small, but it's very noticeable when you start dealing with larger sets of data.
Generally this extra lag isn't important because if, say, the DB takes two seconds to pull the data up to begin with, what difference does an extra millisecond (or even a second) make? But there are situations where speed is critically important, and in those situations you pretty much have to write your own constructs using raw ADO.NET, as in the sketch below.
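For what "write your own constructs using raw ADO.NET" can look like in practice, here is a small sketch that reads a large result set with SqlDataReader and maps columns by ordinal, skipping EF's change tracking and reflection-based materialization. The table and column names are hypothetical.

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

public class Reading
{
    public int SensorId { get; set; }
    public double Value { get; set; }
}

public static class ReadingLoader
{
    public static List<Reading> LoadAll(string connectionString)
    {
        var results = new List<Reading>();

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT SensorId, Value FROM SensorReadings", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                // Resolve column ordinals once, outside the loop.
                int sensorIdOrdinal = reader.GetOrdinal("SensorId");
                int valueOrdinal = reader.GetOrdinal("Value");

                while (reader.Read())
                {
                    results.Add(new Reading
                    {
                        SensorId = reader.GetInt32(sensorIdOrdinal),
                        Value = reader.GetDouble(valueOrdinal)
                    });
                }
            }
        }

        return results;
    }
}
```

If you don't want to leave EF entirely, AsNoTracking() queries are a middle ground for large read-only loads: they skip change tracking while keeping the strongly typed mapping.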
The same question was asked here. It includes some good answers.
Programmers.stackexchange.com was a more appropriate place to ask the question.
I am trying to learn RavenDB using its .NET client. If I'm not wrong, the main reason to use a NoSQL database like RavenDB is speed; it seems to be very fast since it is not relational. However, when playing with RavenDB's .NET client I find that all calls are REST-based. Doesn't that slow things down? For each document I add, the client first makes a HiLo call, which tells the .NET client the next unique number to use, and then makes a second call to store the actual document.
You seem to be running RavenDB in a console app and checking what happens in a very short-lived fashion.
Run RavenDB in a real app over time, and you'll see that it is highly optimized for network usage.
You only see this HiLo call once per X updates, and that X changes based on your actual usage scenarios.
The more you use RavenDB, the faster it becomes.
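As a small illustration of what that looks like with the .NET client, here is a sketch using the older RavenDB 3.x API (newer clients use Urls/Database instead of Url/DefaultDatabase); the server address, database name, and document class are made up. The HiLo request happens once per range of ids, not once per document, and SaveChanges() sends all of a session's pending documents in a single batched request.

```csharp
using Raven.Client.Document; // RavenDB 3.x client namespace

public class Invoice
{
    public string Id { get; set; }      // assigned client-side from the HiLo range
    public decimal Amount { get; set; }
}

public static class RavenDemo
{
    public static void Main()
    {
        using (var store = new DocumentStore
        {
            Url = "http://localhost:8080",  // hypothetical server address
            DefaultDatabase = "Demo"        // hypothetical database name
        })
        {
            store.Initialize();

            using (var session = store.OpenSession())
            {
                for (var i = 0; i < 100; i++)
                {
                    // Ids are assigned locally from a cached HiLo range,
                    // fetched from the server at most once per range.
                    session.Store(new Invoice { Amount = i });
                }

                // One HTTP request sends all 100 documents as a batch.
                session.SaveChanges();
            }
        }
    }
}
```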