How to cache specific parts of the page with ASP.NET MVC?

I'm making a "reputation" type feature on my site, similar to Stack Overflow's.
I want to display a user's points next to their name in the header.
I don't want to have to do a db call just for this every time they move to another page.
Can anyone suggest a nice way to implement this?

Don't make a db call just for this; get the data from the database while you're getting everything else.
Follow the principle of one request/response cycle, one database call. It's really just a matter of joining in the other tables in your query.
For instance, consider a page that shows a question and a bunch of answers. You know the "user" of the original question and the "user" of each of the answers and comments, so while you're getting that information, also join in the pertinent information for each of those users. It's really just joining in additional tables, and you can effectively get all of the data you require in one database call.
When you make a call to a database (like SQL Server) you can get back multiple result sets in one call, so you're not limited to a single result set either.
Edit:
Here is a sample stored procedure (or just a batch of SQL statements) that returns 5 result sets in one database call.
ALTER PROCEDURE [dbo].[GetPostWithSlug]
    @PostSlug varchar(80)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @PostId int;

    SELECT @PostId = PostId
    FROM dbo.Post
    WHERE PostSlug = @PostSlug;

    /* Post with author details */
    SELECT PostId, PostDate, PostSlug,
           dbo.[User].UserFirstName + ' ' + dbo.[User].UserLastName AS PostedBy,
           PostTitle, PostText, PostIsPublished, PostIsPublic, PostTitleImg,
           PostUpdatedDate, dbo.[User].UserWebsite AS AuthorWebsite
    FROM dbo.Post
    INNER JOIN dbo.[User] ON dbo.[User].UserId = dbo.Post.UserId
    WHERE PostId = @PostId;

    /* SubCategories for Post */
    SELECT dbo.PostSubCategory.PostId, dbo.Category.CategoryId, dbo.SubCategory.SubCategoryId,
           dbo.Category.CategoryDescription, dbo.SubCategory.SubCategoryDescription
    FROM dbo.Category
    INNER JOIN dbo.SubCategory ON dbo.Category.CategoryId = dbo.SubCategory.CategoryId
    INNER JOIN dbo.PostSubCategory ON dbo.SubCategory.SubCategoryId = dbo.PostSubCategory.SubCategoryId
    INNER JOIN dbo.Post ON dbo.Post.PostId = dbo.PostSubCategory.PostId
    WHERE dbo.Post.PostId = @PostId
    ORDER BY dbo.PostSubCategory.PostId;

    /* Tags for Post */
    SELECT dbo.PostTag.PostId, dbo.Tag.TagId, dbo.Tag.TagDescription
    FROM dbo.PostTag
    INNER JOIN dbo.Tag ON dbo.Tag.TagId = dbo.PostTag.TagId
    INNER JOIN dbo.Post ON dbo.Post.PostId = dbo.PostTag.PostId
    WHERE dbo.Post.PostId = @PostId
    ORDER BY dbo.PostTag.PostId, dbo.Tag.TagDescription;

    /* Custom Groups for Post */
    SELECT dbo.PostCustomGroup.PostId, dbo.CustomGroupHeader.CustomGroupHeaderId, dbo.CustomGroup.CustomGroupId
    FROM dbo.CustomGroup
    INNER JOIN dbo.CustomGroupHeader ON dbo.CustomGroupHeader.CustomGroupHeaderId = dbo.CustomGroup.CustomGroupHeaderId
    INNER JOIN dbo.PostCustomGroup ON dbo.PostCustomGroup.CustomGroupId = dbo.CustomGroup.CustomGroupId
    WHERE dbo.PostCustomGroup.PostId = @PostId;

    /* Comments for Post */
    EXEC GetCommentsForPost @PostId;
END
In C# code, your DbDataReader will hold multiple result sets, and you advance through them by calling NextResult(). Or, if you're filling a DataSet, you can simply use the Fill method of a data adapter.
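For illustration, consuming those result sets from C# might look like this (a minimal sketch assuming System.Data.SqlClient; connectionString is assumed to be defined elsewhere):
using System.Data;
using System.Data.SqlClient;

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("dbo.GetPostWithSlug", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.AddWithValue("@PostSlug", "some-post-slug");
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        // First result set: the post row itself.
        while (reader.Read())
        {
            // Read post columns, e.g. reader["PostTitle"].
        }
        // Each NextResult() call moves to the following result set
        // (subcategories, tags, custom groups, comments).
        while (reader.NextResult())
        {
            while (reader.Read())
            {
                // Read rows of the current result set.
            }
        }
    }
}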
I use this technique on my blog Matlus - Internet Technology and Software Engineering and you can see how fast (responsive) the site is. The same technique is used on ExposureRoom which is a public social networking website that has a huge amount of traffic.

You're looking for some way to do donut hole caching:
https://stackoverflow.com/search?q=[asp.net-mvc]+donut+hole+caching

If it's only a small part of the page that needs to be cached, you could use a child action to render that part, and cache it. Example: http://www.asp.net/learn/whitepapers/mvc3-release-notes#_Toc276711791
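As a rough sketch of that approach (assuming ASP.NET MVC 3 or later; ReputationController, _repository, and the _Points partial are made-up names):
using System.Web.Mvc;

public class ReputationController : Controller
{
    private readonly IReputationRepository _repository; // hypothetical data access

    // A child action renders just the reputation fragment; [OutputCache]
    // caches this "hole" independently of the page that hosts it.
    [ChildActionOnly]
    [OutputCache(Duration = 300, VaryByParam = "userId")]
    public ActionResult Points(int userId)
    {
        int points = _repository.GetReputationPoints(userId);
        return PartialView("_Points", points);
    }
}
In the layout you would render it with Html.Action("Points", "Reputation", new { userId = currentUserId }), so the header fragment is served from cache for five minutes per user while the rest of the page stays fully dynamic.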

Related

SQL comments not preserved when viewing in Azure query details

We're using .NET Entity Framework to talk to an Azure SQL database. We used QueryOriginInterceptor to add some comments to the top of each SQL command being sent to SQL Server, with the goal of helping identify the location where a particular query came from in the code.
The problem is that when looking at long-running queries in the Azure UI (and in sys.dm_exec_query_stats), the comments are not there.
For example, if we run this query:
-- Stack:
-- Utils.Orders.GetOrders
select *
from [Order] o
join OrderItem oi on oi.OrderId = o.ID
Looking at the same query in Azure, the comments are stripped (screenshot of the Azure query details omitted here).
Is there a way to preserve these comments?
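(For context, a comment-prepending interceptor along the lines described might look like the following; a minimal sketch assuming EF6's DbCommandInterceptor, with the comment text hard-coded here for brevity.)
using System.Data.Common;
using System.Data.Entity.Infrastructure.Interception;

// Prepends an origin comment to every query EF sends to the server.
public class QueryOriginInterceptor : DbCommandInterceptor
{
    public override void ReaderExecuting(DbCommand command,
        DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        command.CommandText =
            "-- Stack:\n-- Utils.Orders.GetOrders\n" + command.CommandText;
        base.ReaderExecuting(command, interceptionContext);
    }
}

// Registered once at startup:
// DbInterception.Add(new QueryOriginInterceptor());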
sys.dm_exec_query_stats does not include the comments, but sys.dm_exec_sql_text does.
This article explains how to use the two together to diagnose issues.
The relevant SQL query from the article is:
SELECT TOP 25
databases.name,
dm_exec_sql_text.text AS TSQL_Text,
CAST(CAST(dm_exec_query_stats.total_worker_time AS DECIMAL)/CAST(dm_exec_query_stats.execution_count AS DECIMAL) AS INT) as cpu_per_execution,
CAST(CAST(dm_exec_query_stats.total_logical_reads AS DECIMAL)/CAST(dm_exec_query_stats.execution_count AS DECIMAL) AS INT) as logical_reads_per_execution,
CAST(CAST(dm_exec_query_stats.total_elapsed_time AS DECIMAL)/CAST(dm_exec_query_stats.execution_count AS DECIMAL) AS INT) as elapsed_time_per_execution,
dm_exec_query_stats.creation_time,
dm_exec_query_stats.execution_count,
dm_exec_query_stats.total_worker_time AS total_cpu_time,
dm_exec_query_stats.max_worker_time AS max_cpu_time,
dm_exec_query_stats.total_elapsed_time,
dm_exec_query_stats.max_elapsed_time,
dm_exec_query_stats.total_logical_reads,
dm_exec_query_stats.max_logical_reads,
dm_exec_query_stats.total_physical_reads,
dm_exec_query_stats.max_physical_reads,
dm_exec_query_plan.query_plan,
dm_exec_cached_plans.cacheobjtype,
dm_exec_cached_plans.objtype,
dm_exec_cached_plans.size_in_bytes
FROM sys.dm_exec_query_stats
CROSS APPLY sys.dm_exec_sql_text(dm_exec_query_stats.plan_handle)
CROSS APPLY sys.dm_exec_query_plan(dm_exec_query_stats.plan_handle)
INNER JOIN sys.databases
ON dm_exec_sql_text.dbid = databases.database_id
INNER JOIN sys.dm_exec_cached_plans
ON dm_exec_cached_plans.plan_handle = dm_exec_query_stats.plan_handle
WHERE databases.name = 'AdventureWorks2014'
ORDER BY dm_exec_query_stats.max_logical_reads DESC;

Scan entire DB programmatically

I have recently inherited a set of very large SQL Server databases. The application and database schema are a mess. I have run across a few fields in the database that store various types of sensitive data where they should not be stored. Since there are almost 10,000 tables in my database, I am in desperate need of a way to programmatically scan a few of these databases to find out where the data is. I realize this will be very resource intensive, so I have set up a server specifically to run the scan against backups of the databases.
I also have zero dollars for purchasing any tools.
Does anyone know of a way with C# and SQL that I can scan all user tables in the database for sensitive data?
An example of scanning for one type of data (e.g. SSNs) would be extremely helpful; I'm confident that I can extrapolate from that to all the scenarios I would need.
This SQL will list all user tables and row counts in a database. It's a starting point:
SELECT o.name,
ddps.row_count
FROM sys.indexes AS i
INNER JOIN sys.objects AS o ON i.OBJECT_ID = o.OBJECT_ID
INNER JOIN sys.dm_db_partition_stats AS ddps ON i.OBJECT_ID = ddps.OBJECT_ID
AND i.index_id = ddps.index_id
WHERE i.index_id < 2 AND o.is_ms_shipped = 0
ORDER BY o.name
Hth,
O
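Building on that starting point, here is a rough C# sketch that enumerates every string column in the user tables and regex-scans a sample of each for SSN-shaped values (a minimal sketch assuming System.Data.SqlClient; the connection string is a placeholder, and with ~10,000 tables you would want to add batching, logging, and restartability):
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Text.RegularExpressions;

class SensitiveDataScanner
{
    // Matches 123-45-6789 style SSNs; extend for unformatted 9-digit values.
    static readonly Regex SsnPattern = new Regex(@"\b\d{3}-\d{2}-\d{4}\b");

    static void Main()
    {
        const string connectionString = "..."; // placeholder
        var columns = new List<string[]>();

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // 1. Enumerate every string column in every user table.
            const string columnSql =
                "SELECT s.name, t.name, c.name " +
                "FROM sys.tables t " +
                "JOIN sys.schemas s ON s.schema_id = t.schema_id " +
                "JOIN sys.columns c ON c.object_id = t.object_id " +
                "JOIN sys.types tp ON tp.user_type_id = c.user_type_id " +
                "WHERE tp.name IN ('char', 'nchar', 'varchar', 'nvarchar')";
            using (var cmd = new SqlCommand(columnSql, conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    columns.Add(new[] { reader.GetString(0), reader.GetString(1), reader.GetString(2) });
            }

            // 2. Sample each column and test the values against the pattern.
            foreach (var col in columns)
            {
                string sampleSql = string.Format(
                    "SELECT TOP 1000 [{2}] FROM [{0}].[{1}] WHERE [{2}] IS NOT NULL",
                    col[0], col[1], col[2]);
                using (var cmd = new SqlCommand(sampleSql, conn))
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        if (SsnPattern.IsMatch(reader.GetString(0)))
                        {
                            Console.WriteLine("Possible SSN data in {0}.{1}.{2}", col[0], col[1], col[2]);
                            break; // one hit per column is enough to flag it
                        }
                    }
                }
            }
        }
    }
}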
This query will help you find columns with a particular name and datatype:
SELECT t.name AS table_name,
SCHEMA_NAME(t.schema_id) AS schema_name,
c.name AS column_name ,tp.name
FROM sys.tables AS t
INNER JOIN sys.columns c ON t.OBJECT_ID = c.OBJECT_ID
INNER JOIN sys.types tp ON tp.user_type_id=c.user_type_id
WHERE c.name LIKE '%Product%' AND tp.name LIKE '%int%'
ORDER BY schema_name, table_name;
This might be irrelevant at this point, but it may serve as an additional note: you can use the Information Schema views to query database objects; they comply with the ISO standard definition for INFORMATION_SCHEMA.
MSDN LINK
If you can open the DB in Microsoft SQL Server Management Studio, you can try ApexSQL. It's a plugin that can be downloaded from here:
http://www.apexsql.com/sql_tools_search.aspx
For example: you select the database and you can look for a column name. It will show you all tables in which you have that column.
Hope it helps.

Most efficient method to load DataSet from subset of multiple joined tables

I have a large inventory system, and I'm having to re-write part of the I/O portion of it. At its heart, there's a product table and a set of related tables. I need to be able to read pieces of it as efficiently as possible. From C# I construct this query:
select * -- includes productid
into #tt
from products where productClass = 547 -- possibly more conditions
select * from #tt;
select * from productHistory where productid in (select productid from #tt);
select * from productSuppliers where productid in (select productid from #tt);
select * from productSafetyInfo where productid in (select productid from #tt);
select * from productMiscInfo where productid in (select productid from #tt);
drop table #tt;
This query gives me exactly the results I need: 5 result sets each having zero, one or more records (if the first returns zero rows, the others do as well, of course). The program then takes those result sets and crams them into an appropriate DataSet. (Which then gets handed off into a constructor expecting just these records.) This query (with differing conditions) gets run a lot.
My question is, is there a more efficient way to retrieve this data?
Re-working this as a single join won't work because each child might return a variable number of rows.
If you have an index on products.productClass this might yield better performance.
select * from products where productClass = 547 -- includes productid
select productHistory.*
from productHistory
join products
on products.productid = productHistory.productid
and products.productClass = 547;
...
If productID is a clustered index, then you will probably get better performance with:
CREATE TABLE #Temp (productid INT PRIMARY KEY CLUSTERED);
insert into #temp
select productid from products where productClass = 547
order by productid;
go
select productHistory.*
from productHistory
join #Temp
on #Temp.productid = productHistory.productid;
A join on a clustered index seems to give the best performance.
Think about it: SQL can match the first row, know it can forget about the rest, then move to the second row knowing it can keep moving forward (not go back to the top).
With a WHERE ... IN (SELECT ...), SQL cannot take advantage of that order.
The more tables you need to join, the more reason to use a #temp table, as you take only about a half-second hit creating and populating it.
If you are going to use a #temp table, you might as well make it a structured temp table, as shown above.
Make sure when you JOIN tables you are joining on indexed columns. Otherwise you will end up with table scans instead of index scans, and your code will be very slow, especially when joining large tables.
Best practice is to optimize your SQL queries to avoid table scans.
If you don't have it already, I would strongly suggest making this a stored procedure.
Also, I suspect, but can't prove without testing it, that you will get better performance if you perform joins on the products table for each of your subtables rather than copying into a local table.
Finally, unless you can combine the data, I don't think there is a more efficient way to do this.
Without seeing your schema and knowing a little more about your data and table sizes, it's hard to suggest definitive improvements on the query side.
However, instead of "cramming the results into an appropriate DataSet," since you are using a batched command to return multiple result sets, you could use SqlDataAdapter to do that part for you:
// cmd is the SqlCommand containing the batched statements from the question.
SqlDataAdapter adapter = new SqlDataAdapter(cmd);
DataSet results = new DataSet();
adapter.Fill(results);
After that, the first result set will be in results.Tables[0], the second in results.Tables[1], etc.
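If the numeric indexes feel fragile, you can also map each result set to a friendly table name before filling (a small sketch; the names "Products", "ProductHistory", etc. are made up to match the question's batch):
// The default source names are "Table", "Table1", "Table2", ... in result-set order.
adapter.TableMappings.Add("Table", "Products");
adapter.TableMappings.Add("Table1", "ProductHistory");
adapter.TableMappings.Add("Table2", "ProductSuppliers");
adapter.TableMappings.Add("Table3", "ProductSafetyInfo");
adapter.TableMappings.Add("Table4", "ProductMiscInfo");
adapter.Fill(results);
// Now you can use results.Tables["ProductHistory"] and so on.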

Modifying SQL query, don't want to use join operation

The following query uses a join operation; I want the same query to execute without using a join. Is it possible to do so? If yes, then how can I do it?
select jname, jcode from heardt inner join judge ON heardt.jud1 = jcode
The reason I am trying to do this is that I am using an ODBC connection for MySQL, and join operations are not getting executed at all: the web page loads for an infinitely long time and never gives any output. That is why I want to try it without a join operation.
I don't know your rationale; I find JOINs much easier to read, but you can replace them by joining (no pun intended) the tables in the WHERE clause.
select jname
, jcode
from heardt
, judge
where heardt.jud1 = judge.jcode
There is no additional filter on that query, so it might return many rows. That could cause a slowdown, depending on the number of records in your table.
You should consider limiting the number of records returned by the query.
Something else to check: whether there is an index on the jcode field.
Select jname, jud1 from heardt where not jud1 is null
EDIT: Ok, this was quick. So: Why do you need the 'join'?
The query Select jname, jud1 from heardt where not jud1 is null shows that jud1 has a value, but not that that value is valid. The join or where validates the relationship between the tables.
If your query takes a very long time to execute with the join in place, it is most likely that you do not have the correct indexes on the tables, causing a table scan to take place instead of an indexed search.
I am using an ODBC connection for MySQL and join operations are not getting executed, as the web page loads for an infinitely long time and doesn't give any output. That is the reason why I want to try without using a join operation.
That's probably not because your JOIN is not getting executed, but because your JOIN query is taking too long. And that's probably because you don't have the correct index defined (an index, preferably a clustered index, on judge.jcode).
If the join is still taking too long after adding such an index you could consider precaching the query with a table or indexed view (latter however not supported in MySQL).
If you are able to run it in SQL Manager, you should be able to run it over the ODBC connection; if not, there is something wrong with the way you are instantiating that connection in C#.
Can you post the C# code you are using, so we can give you a better-judged answer?
As lieven pointed out, I think his solution is a good one:
select jname
, jcode
from heardt
, judge
where heardt.jud1 = judge.jcode
But you should create indexes on the fields you are joining; the result will then be provided much more quickly, so add:
Create index a1 on heardt(jud1);
Create index a2 on judge(jcode);
I think this is the best possible option.
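For completeness, a typical way to run such a query from C# over ODBC looks like the following (a minimal sketch; the DSN/connection string is a placeholder, and an explicit CommandTimeout helps distinguish "slow" from "hung"):
using System.Data.Odbc;

using (var connection = new OdbcConnection("DSN=mysql_dsn;Uid=user;Pwd=pass;"))
using (var command = new OdbcCommand(
    "select jname, jcode from heardt inner join judge on heardt.jud1 = judge.jcode",
    connection))
{
    command.CommandTimeout = 60; // fail fast instead of loading forever
    connection.Open();
    using (OdbcDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            string jname = reader.GetString(0);
            // ... use the row ...
        }
    }
}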

Enhance performance of large, slow data-loading query

I'm trying to load data from Oracle to SQL Server (sorry for not mentioning this before).
I have a table (actually a view that pulls data from different tables) with at least 1 million records. I designed my package in such a way that I have functions for the business logic and call them directly in the select query.
Ex:
X1(id varchar2)
x2(id varchar2, d1 date)
x3(id varchar2, d2 date)
Select id, x, y, z, decode (.....), x1(id), x2(id), x3(id)
FROM Table1
Note: My table has 20 columns and I call 5 different functions on at least 6-7 columns.
Some functions compare the parameters passed against an audit table and perform logic.
How can I improve the performance of my query, or is there a better way to do this?
I tried doing it in C# code, but the initial select returns too many records for a DataSet and I get an OutOfMemoryException.
My function does selects and then performs logic. For example:
Function(c_x2, eid)
    SELECT col1
    INTO p_x1
    FROM tableP
    WHERE eid = eid;

    IF (p_x1 IS NULL) THEN
        ret_var := 'INITIAL';
    ELSIF (p_x1 = 'L') AND (c_x2 = 'A') THEN
        ret_var := 'RL';
        INSERT INTO Audit
            (old_val, new_val, audit_event, id, pname)
        VALUES
            (p_x1, c_x2, 'RL', eid, 'PackageProcName');
    ELSIF (p_x1 = 'A') AND (c_x2 = 'L') THEN
        ret_var := 'GL';
        INSERT INTO Audit
            (old_val, new_val, audit_event, id, pname)
        VALUES
            (p_x1, c_x2, 'GL', eid, 'PackageProcName');
    END IF;

    RETURN ret_var;
I'm getting each row, performing the logic in C#, and then inserting.
If possible INSERT from the SELECT:
INSERT INTO YourNewTable
(col1, col2, col3)
SELECT
col1, col2, col3
FROM YourOldTable
WHERE ....
This will run significantly faster than running a single query and then looping over the result set with an INSERT for each row.
EDIT, regarding the OP's question edit:
You should be able to replace the function call with plain SQL in your query: mimic the 'INITIAL' case using a LEFT JOIN on tableP, and calculate the 'RL' or 'GL' values using CASE.
EDIT, based on the OP's recent comments:
Since you are loading data from Oracle into SQL Server, this is what I would do. Most people who could help have moved on and will not read this question again, so open a new question where you say: 1) you need to load data from Oracle (version) to SQL Server (version); 2) currently you are loading it with one query, processing each row in C# and inserting it into SQL Server, and it is slow; plus all the other details. There are much better ways of bulk loading data into SQL Server. As for this question, you could accept an answer, answer it yourself explaining that you need to ask a new question, or just leave it unaccepted.
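(To give that pointer some shape, a bulk load from Oracle into SQL Server might look roughly like this; a minimal sketch assuming System.Data.SqlClient and Oracle's managed ADO.NET provider, with placeholder connection strings and table names.)
using System.Data;
using System.Data.SqlClient;
using Oracle.ManagedDataAccess.Client; // assumption: Oracle's managed provider

using (var source = new OracleConnection(oracleConnectionString))
using (var target = new SqlConnection(sqlServerConnectionString))
{
    source.Open();
    target.Open();

    using (var select = new OracleCommand("SELECT id, x, y, z FROM Table1", source))
    using (IDataReader reader = select.ExecuteReader())
    using (var bulkCopy = new SqlBulkCopy(target))
    {
        bulkCopy.DestinationTableName = "dbo.Table1";
        bulkCopy.BatchSize = 10000;    // stream in batches, not row by row
        bulkCopy.BulkCopyTimeout = 0;  // no timeout for a large load
        bulkCopy.WriteToServer(reader); // streams rows without holding them all in RAM
    }
}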
My recommendation is that you do not use functions and then call them within other SELECT statements. This:
SELECT t.id, ...
x1(t.id) ...
FROM TABLE t
...is equivalent to:
SELECT t.id, ...
(SELECT x.column FROM x1 x WHERE x.id = t.id)
FROM TABLE t
Encapsulation doesn't work in SQL the way it does in C# and similar languages. While the approach makes maintenance easier, performance suffers because the subselects execute for every row returned.
A better approach would be to update the supporting function to include the join criteria (i.e. "where x.id = t.id", for lack of the real one) in the SELECT:
SELECT x.id,
       x.column
FROM x1 x
...so you can use it as a JOIN:
SELECT t.id, ...
x1.column
FROM TABLE t
JOIN (SELECT x.id,
x.column
FROM MY_PACKAGE.x) x1 ON x1.id = t.id
I prefer that to having to incorporate the function logic into the queries for the sake of maintenance, but sometimes it can't be helped.
Personally, I'd create an SSIS import to do this task. Using a bulk insert you can improve speed dramatically, and SSIS can handle the functions part after the bulk insert.
Firstly you need to find where the performance problem actually is. Then you can look at trying to solve it.
What is the performance of the view like? How long does it take the view to execute without any of the function calls? Try running the command:
create table the_view_table
as
select *
from the_view;
How well does it perform? Does it take 1 minute or 1 hour?
How well do the functions perform? According to the description, you are making approximately 5 million function calls; they had better be pretty efficient! Also, are the functions defined as deterministic? If the functions are defined using the DETERMINISTIC keyword, Oracle has a chance of optimizing away some of the calls.
Is there a way of reducing the number of function calls? The functions are being called once the view has been evaluated and the million rows of data are available. But are all the input values really needed at the highest level of the query? Can the function calls be embedded into the view at a lower level? Consider the following two queries; which would be quicker?
select
    f.dim_id,
    d.dim_col_1,
    long_slow_function(d.dim_col_2) as dim_col_2
from large_fact_table f
join small_dim_table d on (f.dim_id = d.dim_id);

select
    f.dim_id,
    d.dim_col_1,
    d.dim_col_2
from large_fact_table f
join (
    select
        dim_id,
        dim_col_1,
        long_slow_function(dim_col_2) as dim_col_2
    from small_dim_table) d on (f.dim_id = d.dim_id);
Ideally the second query should run quicker, as it calls the function fewer times.
The performance issue could be in any of these places and until you investigate the issue, it would be difficult to know where to direct your tuning efforts.
A couple of tips:
Don't load all records into RAM but process them one by one.
Try to run as many functions on the client as possible. Databases are really slow to execute user defined functions.
If you need to join two tables, it's sometimes possible to create two connections on the client. Fetch the main data with connection 1 and the audit data with connection 2. Order the data the same way for both connections so you can read matching records from both and perform whatever you need on them.
If your functions always return the same result for the same input, use a computed column or a materialized view. The database will run the function once and save it in a table somewhere. That will make INSERT slow but SELECT quick.
Create a sorted index on your table.
Introduction to SQL Server Indexes; other RDBMSs are similar.
Edit since you edited your question:
Using a view is even more sub-optimal, especially when querying single rows from it. I think your "business functions" are actually something like stored procedures?
As others suggested, in SQL always go set based. I assumed you already did that, hence my tip to start using indexing.
