I have a web page where I need to get fields from many tables in SQL that can all be joined. Is it better in terms of performance to make a few separate queries to the database or one big SQL statement with joins? I'm using MVC with Web API calls.
Thanks.
In the past I've created DB views to define the data I'm looking for. Depending on the data access framework you are using, this can also be helpful in returning the data and translating it to objects.
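For illustration, here is a minimal sketch of that idea, assuming a hypothetical view named dbo.vw_CustomerOrderSummary that already contains the JOINs, queried with plain ADO.NET and translated into objects; adjust the names and columns to your own schema.

```csharp
// Minimal sketch (not from the original answer): assumes a hypothetical view
// dbo.vw_CustomerOrderSummary that joins the underlying tables, plus a POCO
// that mirrors its columns.
using System.Collections.Generic;
using Microsoft.Data.SqlClient;   // or System.Data.SqlClient on older stacks

public class CustomerOrderSummary
{
    public int CustomerId { get; set; }
    public string CustomerName { get; set; }
    public decimal OrderTotal { get; set; }
}

public static class SummaryRepository
{
    public static List<CustomerOrderSummary> GetSummaries(string connectionString)
    {
        var results = new List<CustomerOrderSummary>();

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT CustomerId, CustomerName, OrderTotal FROM dbo.vw_CustomerOrderSummary",
            connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Translate each row of the view into an object.
                    results.Add(new CustomerOrderSummary
                    {
                        CustomerId = reader.GetInt32(0),
                        CustomerName = reader.GetString(1),
                        OrderTotal = reader.GetDecimal(2)
                    });
                }
            }
        }

        return results;
    }
}
```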
As far as performance goes, I'm most familiar with SQL Server, and in Management Studio there is an option to "Include Actual Execution Plan". This will show you a full breakdown of your JOIN statements, what indexes (if any) are being used, and will suggest indexes to improve performance. I recommend this tool to all developers on my teams when they are stuck with a slow-performing page.
One other thing to note: the database configuration also makes a difference. If you are running a local database you will have fewer concerns than if you were running a cloud-based database (Azure SQL, etc.), as those have management overhead beyond your control when it comes to availability and the physical location of your instance at any given time.
We have a .NET application using ADO.NET and Entity Framework and a lot of legacy stored procedures. Occasionally an operation will bring the database server's CPU to 100% for seconds or even minutes. During this time no other operations can be executed against the database. Some of the culprit code is far too complex and business-critical to be feasible to refactor in the short term, but this can also happen with newer code depending on the situation.
I would like to prevent any one SQL operation from taking 100% of the CPU. Is there any way to configure MS SQL to give no more than, say, 20% of the CPU to any one query?
I know that ideally we would rewrite the code to not be as intensive, but that is not feasible in the short term, so I'm looking for a general setting which ensures this can never happen.
Take a look at the Resource Governor (assuming you're using SQL 2008 or up). A good simple overview on usage is here. Though it won't necessarily work on a specific query, using a reasonable classifier function will/should allow you to narrow it down pretty closely if you like. I don't have 10 rep yet so I can only post 2 links, but if you google "Sql Server classifier function" you'll get some decent guidance.
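To make that concrete, here is a hedged sketch of the pieces the Resource Governor needs: a resource pool with a CPU limit, a workload group, and a classifier function that routes sessions into that group. The pool, group, and function names and the APP_NAME() check are hypothetical, and the T-SQL is shown being executed from C# as a one-off setup utility against the master database; you could just as easily run the same statements in Management Studio.

```csharp
// Hedged sketch of a one-time Resource Governor setup, executed from C# via
// ADO.NET. The connection string must point at the master database; pool,
// group, and function names below are hypothetical, and the APP_NAME() check
// assumes the heavy code sets "Application Name=LegacyBatch" in its connection string.
using Microsoft.Data.SqlClient;   // or System.Data.SqlClient

public static class ResourceGovernorSetup
{
    public static void Configure(string masterConnectionString)
    {
        string[] statements =
        {
            // MAX_CPU_PERCENT throttles only when there is CPU contention;
            // CAP_CPU_PERCENT (SQL Server 2012+) is a hard cap.
            "CREATE RESOURCE POOL LimitedPool WITH (MAX_CPU_PERCENT = 20);",

            "CREATE WORKLOAD GROUP LimitedGroup USING LimitedPool;",

            // The classifier runs for every new session and returns the name
            // of the workload group the session should be placed in.
            @"CREATE FUNCTION dbo.fnClassifier() RETURNS sysname
              WITH SCHEMABINDING
              AS
              BEGIN
                  RETURN CASE
                      WHEN APP_NAME() = N'LegacyBatch' THEN N'LimitedGroup'
                      ELSE N'default'
                  END;
              END;",

            "ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);",
            "ALTER RESOURCE GOVERNOR RECONFIGURE;"
        };

        using (var connection = new SqlConnection(masterConnectionString))
        {
            connection.Open();
            foreach (var sql in statements)
            {
                using (var command = new SqlCommand(sql, connection))
                {
                    command.ExecuteNonQuery();   // each statement runs in its own batch
                }
            }
        }
    }
}
```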
I am working on an application that needs to save and retrieve data quickly from the database. I am using MVC5 Razor views with C#.
The scenario is something like this: there is a post that can be liked/unliked by any user, the data is stored in the database, and a user can also share that post. Facebook is a good example; on Facebook we like/unlike posts and it works very fast.
Can anyone tell me which database (SQL Server, MySQL, Oracle, etc.) and which data access approach (Entity Framework, stored procedures, another ORM, NoSQL, etc.) I should use in my scenario?
Thanks in advance....
The selection of the database depends entirely on you, and cost is also a factor, since MySQL is free.
You have to identify your requirements first and then select the database accordingly.
If speed is your main concern, then you need to read about Entity Framework, stored procedures, ORMs in general, and NoSQL. These are different approaches used for different purposes.
Object-relational mapping (ORM, O/RM, or O/R mapping) is a programming technique for converting data between incompatible type systems in object-oriented programming languages. This creates, in effect, a "virtual object database" that can be used from within the programming language.
So it completely depends on your requirements, but first I suggest you read more about these concepts.
These two links will definitely help you understand more:
https://kevinlawry.wordpress.com/2012/08/07/why-i-avoid-stored-procedures-and-you-should-too/
http://www.davidwaynebaxter.com/tech/dev/orm-or-sprocs/
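To give a flavour of the ORM option mentioned above, here is a minimal Entity Framework sketch for the like/unlike scenario in the question; the Post/PostLike entities and the context name are hypothetical.

```csharp
// Minimal sketch of the ORM approach with Entity Framework, just to show the
// idea; the entities, context, and toggle logic below are hypothetical.
using System.Data.Entity;          // EF6; EF Core uses Microsoft.EntityFrameworkCore
using System.Linq;

public class Post
{
    public int Id { get; set; }
    public string Content { get; set; }
}

public class PostLike
{
    public int Id { get; set; }
    public int PostId { get; set; }
    public string UserName { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Post> Posts { get; set; }
    public DbSet<PostLike> PostLikes { get; set; }
}

public static class LikeService
{
    // Toggles a like: inserts it if missing, removes it if present.
    public static void ToggleLike(int postId, string userName)
    {
        using (var db = new AppDbContext())
        {
            var existing = db.PostLikes
                .FirstOrDefault(l => l.PostId == postId && l.UserName == userName);

            if (existing == null)
                db.PostLikes.Add(new PostLike { PostId = postId, UserName = userName });
            else
                db.PostLikes.Remove(existing);

            db.SaveChanges();   // EF turns the tracked changes into SQL
        }
    }
}
```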
I'm developing my financial app for simple operations, but I need to be able to log any changes made to my records. I use C# ASP.NET for my application code and MS SQL 2014 for my DB.
My question is: which is better?
Application layer: write a method in code that creates a record in a history table in the DB?
Database layer: write a trigger/procedure that creates a record in a history table in the DB?
My main concern is performance - the app is hosted on an IIS server in my home, so I need to fine-tune this so it is lag-free for my users and doesn't put a big workload on my server.
This has a lot to do with preferences and maintainability.
For the sake of maintainability, I always go with the Layered approach, myself.
Here's an example why.
Let's say you have your SQL inserts and updates scattered throughout the code. Then, one day, your database changes in such a way that you have to track down each and every one of those inserts and updates and change them one by one.
For the sake of argument, you could also create one function that does all of these inserts and updates, and call that specific function every time from your app. That would work, but it could become cluttered, as you'll eventually end up with a lot of those functions for different tables, views, etc.
Now let's say you have used the layered approach. You would then simply find the class that does all updates and inserts to one specific table and simply do your changes there. The rest of the application perhaps doesn't even need to be aware of this change.
Performance is (in my opinion) not really a factor. Creating an object isn't expensive at all; the cost is hardly measurable. So I'd say go with the layered approach.
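As a rough illustration of the application-layer (layered) option, here is a hedged sketch in which a repository class performs the update and writes the matching history row inside one transaction; the table and column names are hypothetical.

```csharp
// Hedged sketch of the application-layer option: the repository owns the
// update and writes the matching history row in the same transaction.
// dbo.Accounts and dbo.AccountHistory are hypothetical tables.
using Microsoft.Data.SqlClient;   // or System.Data.SqlClient

public class AccountRepository
{
    private readonly string _connectionString;

    public AccountRepository(string connectionString) => _connectionString = connectionString;

    public void UpdateBalance(int accountId, decimal newBalance, string changedBy)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                // 1. The actual change.
                using (var update = new SqlCommand(
                    "UPDATE dbo.Accounts SET Balance = @balance WHERE Id = @id",
                    connection, transaction))
                {
                    update.Parameters.AddWithValue("@balance", newBalance);
                    update.Parameters.AddWithValue("@id", accountId);
                    update.ExecuteNonQuery();
                }

                // 2. The history record, in the same transaction, so either
                //    both rows are written or neither is.
                using (var audit = new SqlCommand(
                    @"INSERT INTO dbo.AccountHistory (AccountId, NewBalance, ChangedBy, ChangedAt)
                      VALUES (@id, @balance, @changedBy, SYSUTCDATETIME())",
                    connection, transaction))
                {
                    audit.Parameters.AddWithValue("@id", accountId);
                    audit.Parameters.AddWithValue("@balance", newBalance);
                    audit.Parameters.AddWithValue("@changedBy", changedBy);
                    audit.ExecuteNonQuery();
                }

                transaction.Commit();
            }
        }
    }
}
```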
I am building a framework for a high-traffic site; the site only records log entries from other application sites.
I use Redis or files as a middle layer to cache the log entries.
At regular intervals, the program flushes the cache into the database.
Because there is a large amount of data, I need an ORM that is lightweight and fast.
I use MySQL 5.x, because SQL Server is relatively expensive.
I am more familiar with iBATIS (for Java), but iBatis.NET has not been updated for two years, so can iBatis.NET still meet these requirements?
If iBATIS can do it, I can save a lot of learning time.
Or do you have any better suggestions?
I am not very familiar with newer C# technology, so please point me in the right direction.
For anything high-performance, I use Dapper. It's a micro-ORM with very high performance, developed and used by the very website you are looking at (stackoverflow.com).
It has a NuGet package too, which you can install with the Install-Package Dapper command in the Package Manager Console.
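Here is a minimal Dapper sketch for the log-flushing scenario in the question. The LogEntry type and the log_entries table are hypothetical, and MySqlConnection is assumed to come from the MySqlConnector (or MySql.Data) NuGet package.

```csharp
// Minimal Dapper sketch: flushing cached log entries into MySQL and reading
// them back. LogEntry and log_entries are hypothetical.
using System;
using System.Collections.Generic;
using Dapper;
using MySqlConnector;   // assumed package; MySql.Data exposes MySql.Data.MySqlClient

public class LogEntry
{
    public string Source { get; set; }
    public string Message { get; set; }
    public DateTime LoggedAt { get; set; }
}

public static class LogStore
{
    public static void Flush(string connectionString, IEnumerable<LogEntry> entries)
    {
        using (var connection = new MySqlConnection(connectionString))
        {
            // Passing a collection makes Dapper run the insert once per element.
            connection.Execute(
                "INSERT INTO log_entries (source, message, logged_at) VALUES (@Source, @Message, @LoggedAt)",
                entries);
        }
    }

    public static IEnumerable<LogEntry> Recent(string connectionString, string source)
    {
        using (var connection = new MySqlConnection(connectionString))
        {
            // Query<T> maps result columns to properties by name.
            return connection.Query<LogEntry>(
                "SELECT source AS Source, message AS Message, logged_at AS LoggedAt " +
                "FROM log_entries WHERE source = @source ORDER BY logged_at DESC LIMIT 50",
                new { source });
        }
    }
}
```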
When I read pro and con lists of using Entity Framework (or any modern ORM really), I'm surprised that the following point doesn't arise (self quote):
Using strongly-typed domain entities allows for type checking at compile time, which essentially performs a verification of all your database operations. This is something that is not possible with ADO.NET (whether using inline SQL or stored procedures).
For me, this is one of the biggest advantages of using an ORM. An issue that I come across regularly when dealing with ADO.NET-based applications is run-time errors from SQL. Static checking completely eliminates this.
Could anyone elaborate as to why this isn't hugely relevant to many developers?
Oh it's great.
It's also not free. EF is essentially built on top of ADO.NET; it just uses reflection to convert back and forth between strongly typed classes and the actual column names. This is fine if your recordset is fairly small, but it becomes very noticeable when you start dealing with larger sets of data.
Generally this extra lag isn't important, because if, say, the DB takes two seconds to pull the data up to begin with, what difference does an extra millisecond (or even a second) make? But there are situations where speed is critically important, and in those situations you pretty much have to write your own constructs using raw ADO.NET.
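To illustrate the trade-off being discussed, here is a hedged sketch against a hypothetical Orders table: the EF/LINQ version is verified at compile time, while the raw ADO.NET version is typically faster on large result sets but only fails at run time if a column name or type is wrong.

```csharp
// Hedged comparison sketch; the Order entity, ShopContext, and dbo.Orders
// table are hypothetical.
using System.Collections.Generic;
using System.Data.Entity;          // EF6; EF Core uses Microsoft.EntityFrameworkCore
using System.Linq;
using Microsoft.Data.SqlClient;    // or System.Data.SqlClient

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public static class OrderQueries
{
    // Compile-time checked: renaming Order.Total breaks this at build time.
    public static decimal[] LargeOrderTotalsEf()
    {
        using (var db = new ShopContext())
        {
            return db.Orders
                .Where(o => o.Total > 1000m)
                .Select(o => o.Total)
                .ToArray();
        }
    }

    // Raw ADO.NET: the SQL text and the column name passed to the reader are
    // only verified when the query actually runs.
    public static decimal[] LargeOrderTotalsAdo(string connectionString)
    {
        var totals = new List<decimal>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT Total FROM dbo.Orders WHERE Total > 1000", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    totals.Add(reader.GetDecimal(reader.GetOrdinal("Total")));
            }
        }
        return totals.ToArray();
    }
}
```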
The same question was asked here. It includes some good answers.
Programmers.stackexchange.com was a more appropriate place to ask the question.