Entity Framework 4.1 performance issues - C#

I am currently using Entity Framework in an ASP.NET MVC 3 project, and I am getting severe performance issues while looping through the records in the view to display them.
The data is being received quickly, so I know it's not the connection to our remote Oracle server, and there are no lazy-loaded relationships in the model I'm using. Yet each record takes 120-300ms to render a simple 3-field output with an action link.
Currently it takes over 3 minutes to load the page with 800-ish records.
I've tried tweaking configuration options, but none seem to help.
Does anyone have any ideas?
Edit: controller code
readonly OracleSampleManagerContext db = new OracleSampleManagerContext();
public virtual ActionResult Index()
{
    var spList = db.SamplePoints.OrderBy(e => e.Id).ToList();
    return View(MVC.Reports.Views.SamplePointList, spList);
}
<h2>
Selection By Sample Point
</h2>
<table>
@foreach (var sp in Model)
{
    System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();
    sw.Start();
    <tr>
        <td>@Html.ActionLink(sp.Id, MVC.Reports.Results(sp.Id))</td>
        <td>@sp.Description</td>
        <td>@sp.PointLocation</td>
        <td>@sw.ElapsedMilliseconds</td>
    </tr>
    sw.Stop();
    sw.Reset();
}
</table>
Example:
0200 72" Sewer to river - Once through cooling water OUTFALLS 346ms
0400 66" Sewer to river - Combined effluent OUTFALLS 347ms
0500 54" Sewer to river - Once through cooling water OUTFALLS 388ms
06-AI-18 TBA in Water IB2 228ms
06-AI-31 TBA in Water IB2 172ms

My guess is that MVC.Reports.Results(sp.Id) does some sort of DB lookup, and since you converted your model to a list before sending it to the view, you now have to hit the database again for each record, making a page of 800 records require 801 separate trips to the database instead of one.
Remove the ToList() from your first query.
Refactor MVC.Reports.Results(sp.Id) to take an object instead of an int, and within that method work on the object directly. A minimal sketch of the first suggestion is below.
Both of the above may require you to move the scope of your entity context from within the action out to the controller.
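A rough sketch of what removing the ToList() could look like against the asker's controller (the Dispose override is an assumption about how the context's lifetime should be handled once the view enumerates the query lazily):
public partial class ReportsController : Controller
{
    // Context scoped to the controller so it is still open while the view enumerates.
    readonly OracleSampleManagerContext db = new OracleSampleManagerContext();

    public virtual ActionResult Index()
    {
        // No ToList(): the view enumerates the query against the still-open context.
        var spList = db.SamplePoints.OrderBy(e => e.Id);
        return View(MVC.Reports.Views.SamplePointList, spList);
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing) db.Dispose(); // release the context together with the controller
        base.Dispose(disposing);
    }
}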

Related

Revit API. Get a room from the linked model in which the main model element is located

I have a quite simple but at the same time challenging problem with the Revit API.
There is a main Revit MEP model and a linked architectural one. I want to know which room each of my MEP elements belongs to. For this purpose, I have tried two ways:
Use ElementIntersectsFilter against the link model (there is an article on Jeremy's blog) - it doesn't work when the link is rotated or displaced.
Import the solid geometry and transform the solid, then use ElementIntersectsSolidFilter - it works, but takes an enormous amount of time. For example, my main model has about 35,000 elements and the link about 1,100 rooms. 95% of the time is spent passing through ElementIntersectsSolidFilter: 30 seconds per room on average, which means the system would hang for about 9 hours!
Filter all target elements from the main model (~35,000 elements):
ICollection<ElementId> fec = new FilteredElementCollector(doc)
    .WhereElementIsNotElementType()
    .WherePasses(new ElementMulticategoryFilter(bic)) // bic: the list of target BuiltInCategory values
    .ToElementIds()
    .ToList();
Get all rooms from the link and retrieve their solids (~1,100 elements):
IEnumerable<Room> rooms = new FilteredElementCollector(link.GetLinkDocument())
    .WhereElementIsNotElementType()
    .OfCategory(BuiltInCategory.OST_Rooms)
    .Cast<Room>();
RoomInfo holds the solid and other additional information about a Room; rlf does all the retrieval work:
IEnumerable<RoomInfo> ifs = rlf.GetItemInfos(rooms).Cast<RoomInfo>();
A method that takes a solid and a reference to the collection of target elements. It returns all elements that intersect the solid and removes them from the target collection (~35,000), so the collection shrinks with every iteration:
public ICollection<ElementId> GetIntersectedElements(Solid solid, ref ICollection<ElementId> eIds)
{
    if (!eIds.Any())
    {
        log.Info("Input collection is empty. Task done.");
        return new List<ElementId>();
    }
    var solidFilter = new ElementIntersectsSolidFilter(solid);
    var fec = new FilteredElementCollector(doc, eIds)
        // This filter eats all the time
        .WherePasses(solidFilter)
        .ToElementIds();
    if (fec.Any())
    {
        eIds = new FilteredElementCollector(doc, eIds)
            .Excluding(fec)
            .ToElementIds();
    }
    return fec;
}
I would appreciate any ideas on how to do this in a reasonable amount of time.
You can solve this more simply and effectively. You need to figure out the transformation between the main MEP model and the linked architectural one. Next, determine the location P of your family instance, MEP element, or whatever. Transform P from the MEP model into the linked architectural model. In the architectural model, figure out what room or space contains the transformed point.
A recent thread in the Revit API discussion forum handles a different topic and yet illustrates almost all the principles required: How to calculate the column finish area of room.
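A minimal sketch of that approach, assuming link is the question's RevitLinkInstance and that the elements of interest are point-located (curve-located elements such as ducts or pipes would need a representative point like the curve midpoint instead; Room comes from Autodesk.Revit.DB.Architecture):
public Room FindRoomFor(RevitLinkInstance link, Element mepElement)
{
    // Transform from main-model coordinates into the link's coordinates.
    Transform toLink = link.GetTotalTransform().Inverse;
    Document linkDoc = link.GetLinkDocument();

    var lp = mepElement.Location as LocationPoint;
    if (lp == null) return null;             // not a point-located element

    XYZ pInLink = toLink.OfPoint(lp.Point);  // the point P, transformed into the link
    return linkDoc.GetRoomAtPoint(pInLink);  // room containing P, or null if none
}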

jQuery DataTables, pagination and MVC. Take x amount from database

Having a bit of an issue with DataTables. I want all the data from my SQL table, however running it on our site is causing some severe lag as it has over 8,000 records. So I tried using Take(10).ToList(); in my controller, however jQuery DataTables will only populate with 10 records (obviously). I am hoping there is a simple method or approach in my controller that I could take, for only loading ten records at a time, yet still keeping the pagination of the entire SQL table in the DataTable. (Long shot.)
For instance, if I load the entire table's data into the DataTable it has something like 800+ pages. If I only take 10 records at a time, DataTables will only show one page. I need to take 10 records at a time but also show 800 pages, and when I click next or on a specific page it should load those records. Can this be done from the controller/LINQ, without taking an exhaustively long trip down JSON/AJAX lane?
Controller:
public PartialViewResult Listing()
{
    var model = _db.MyDataBase.Take(10).ToList();
    return PartialView(model);
}
View:
<table id="example" class="table table-striped table-bordered table-hover">
<tbody>
@foreach (var p in Model)
{
    <tr class="gridRow" data-id="@p.MyId">
        <td>@p.Id</td>
        // etc....
Script:
<script type="text/javascript">
$(function () {
// Initialize Example
$('#example').dataTable();
});
</script>
I suggest you go through the server-side processing feature of DataTables; refer to the datatables.net documentation.
Just tweak your method to receive 2 parameters: pageNumber and pageResults.
Then add Skip to jump over the results you don't want to show, and use Take to start taking results from the point where Skip stopped.
Usage:
Listing(1, 20) - will give us results 1-20
Listing(3, 20) - will give us results 41-60
Listing(5, 10) - will give us results 41-50
Code:
public PartialViewResult Listing(int pageNumber, int pageResults)
{
    var model = _db.MyDataBase
        .OrderBy(row => row.ID)
        .Skip((pageNumber - 1) * pageResults) // skip the earlier pages
        .Take(pageResults)                    // take just this page
        .ToList();
    return PartialView(model);
}
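If you do end up going the AJAX route, DataTables' server-side processing mode sends paging parameters with each draw and expects the total record count back so it can render the full ~800-page pager. A rough sketch against the same _db.MyDataBase source (the draw/start/length parameters and response property names follow the DataTables server-side protocol; ListingData is a hypothetical action name):
public JsonResult ListingData(int draw, int start, int length)
{
    // The full row count lets DataTables draw the complete pager.
    int total = _db.MyDataBase.Count();

    var page = _db.MyDataBase
        .OrderBy(row => row.ID)
        .Skip(start)   // DataTables sends an absolute row offset
        .Take(length)  // and the page size
        .ToList();

    return Json(new
    {
        draw = draw,
        recordsTotal = total,
        recordsFiltered = total, // no search filter applied in this sketch
        data = page
    }, JsonRequestBehavior.AllowGet);
}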

Best way to display profiles results in MVC in a paged way

I'm hesitating over how best to display the search results.
The situation is the following:
I have a website where a user can fill in a few textboxes and perform a search based on them. I would then like to display the results in a paged way, something like:
photo, name, age | photo, name, age | photo, name, age
So I would like to have 3 columns, and then probably 5 rows.
But I don't know what the best way to represent something like this is. Is there a best approach, etc.?
Thanks in advance
Well, there are so many approaches you can take, from many different perspectives; we could be talking about it for ages. In a nutshell, you can implement paging at the presentation layer, at the controller level, at the business logic layer, or at the data layer. Basically, the lower you go down your application's layers, the better performance you'll get. There's not much difference between implementing paging at the controller and at the business logic layer when it comes to performance, but design-wise it's better to keep these concerns at the business logic layer for maintainability and scalability.
You will get much better performance if paging is implemented at the data layer, especially if you have a lot of data to display, by telling the data access layer to fetch just the page of data your application is interested in. Other approaches force you to retrieve all data from the backend store, which may result in unnecessary bandwidth consumption and data transfers. See the example below, where I'm doing the paging at the data layer level.
Example
SQL Query
I'm using a custom data access layer that runs the SQL query below to page through a large number of devices...
select top (@pagesize) *
from
(
    select row_number() over (order by d.Name) as RowId
        , d.Id
        , d.Name
        , d.IsDeleted
        , dt.Id as DeviceTypeId
        , dt.Name as DeviceTypeName
    from Devices d
    left join DeviceTypes dt on dt.Id = d.DeviceTypeId
) as o
where o.RowId <= (@page * @pagesize) and o.RowId > ((@page - 1) * @pagesize)
The query is simple: you specify the page size (records per data page) and the data page to retrieve (first page, second page, etc.). This query is run by my data access layer, which constructs business objects and passes them to the business logic layer.
Business Logic
Then, in my business logic I call the devices data object to retrieve the data page I'm interested in. The following method does that job...
public IList<DeviceBO> GetAllDevices(int page, out int count)
{
    count = DataProvider.Devices.GetAllDevicesCount();
    return DataProvider.Devices.GetAllDevices(page, BusinessConstants.GRID_PAGE_SIZE);
}
A very straightforward operation, yet there are a few things to notice. One is that I'm making two round trips to the database: one to get the count of all devices and a second to retrieve a page of devices. One could argue this should be avoided by making a single trip, which is true, but in my case it is a very quick query that runs in less than 1 second, and believe me... I have over a million records. The call to GetAllDevicesCount does nothing more than a SQL select count(Id) from dbo.Devices.
Also notice that DataProvider is just a factory object; you don't really need to worry about the implementation details. BusinessConstants.GRID_PAGE_SIZE simply returns a number, the standard page size used across my application (I've got a few grids/tables); it's just a way to keep things in one place in case I want to change the page size across all grids/tables later on.
Controller
Then I have defined a device controller that exposes the following action method (Manage)...
public class DevicesController : Controller
{
    public ActionResult Manage(string currentPage)
    {
        return process_device_list(currentPage);
    }

    // This method was created as a result of code refactoring since it is used in several other action methods not shown here
    private ActionResult process_device_list(string currentPage, bool redirect = false)
    {
        int count = 0;
        int page = 1;
        if (!int.TryParse(currentPage, out page))
        {
            if (!string.IsNullOrEmpty(currentPage))
                return RedirectToAction("Manage");
            page = 1;
        }
        var model = new DeviceManagementListModel();
        model.Devices = BusinessFactory.DevicesLogic.GetAllDevices(page, out count);
        model.ActualCount = count;
        model.CurrentPage = page;
        if (!redirect)
            return View(model);
        else
            return RedirectToAction("Manage", new { @currentPage = currentPage });
    }
}
View
The view is pretty much just HTML and Razor syntax; there's nothing interesting going on there. Perhaps the footer of the table/grid is where the more interesting things happen, since the paging markup is defined there...
<tfoot>
<tr>
    <td colspan="4">
        <div class="pager">
            @using (Html.BeginForm("Manage", "Devices", FormMethod.Get)){
                @Html.TextBoxFor(x => x.CurrentPage, new { title = "Enter a page number to change the page results" })
                <input type="submit" value="Go" title="Change the current page result" />
            }
        </div>
        <div class="total">
            @Model.ActualCount record(s) found
        </div>
    </td>
</tr>
</tfoot>
And this is how it looks...
Notice in the tfoot section of the table in my view that I have added a form element that makes a GET request to my Manage action method rather than a POST request. This is useful for providing a nice RESTful URL where users can manually specify the page they're interested in...
I hope this helps you get some basic ideas.

Poor performance when loading child entities with Entity Framework

I'm building an ASP.NET application with Entity Framework (Code First) and I've implemented a repository pattern like the one in this example.
I only have two tables in my database: one called Sensor and one called MeasurePoint (containing only TimeStamp and Value). A sensor can have multiple measure points. At the moment I have 5 sensors and around 15,000 measure points (approximately 3,000 points for each sensor).
In one of my MVC controllers I execute the following lines (to get the most recent MeasurePoint for a Sensor)
DbSet<Sensor> dbSet = context.Set<Sensor>();
var sensor = dbSet.Find(sensorId);
var point = sensor.MeasurePoints.OrderByDescending(measurePoint => measurePoint.TimeStamp).First();
This call takes ~1s to execute, which feels like a lot to me. The call results in the following SQL query
SELECT
    [Extent1].[MeasurePointId] AS [MeasurePointId],
    [Extent1].[Value] AS [Value],
    [Extent1].[TimeStamp] AS [TimeStamp],
    [Extent1].[Sensor_SensorId] AS [Sensor_SensorId]
FROM [dbo].[MeasurePoint] AS [Extent1]
WHERE ([Extent1].[Sensor_SensorId] IS NOT NULL) AND ([Extent1].[Sensor_SensorId] = @EntityKeyValue1)
That query only takes ~200ms to execute, so the time is spent somewhere else.
I've profiled the code with the help of Visual Studio Profiler and found that the call that causes the delay is
System.Data.Objects.Internal.LazyLoadBehavior.<>c__DisplayClass7`2.<GetInterceptorDelegate>b__1(!0,!1)
So I guess it has something to do with lazy loading. Do I have to live with performance like this, or are there improvements I can make? Is it the ordering by time that causes the performance drop, and if so, what options do I have?
Update:
I've updated the code to show where sensor comes from.
What that will do is load the entire child collection into memory and then perform the .First() LINQ query against the loaded (approximately 3,000) children.
If you just want the most recent point for that sensor, query the context directly instead (assuming MeasurePoint exposes its Sensor, so the filter can run in SQL):
context.MeasurePoints.Where(mp => mp.Sensor.SensorId == sensorId).OrderByDescending(mp => mp.TimeStamp).First();
If that's the query it's running, it's loading all 3,000 points into memory for the sensor. Try running the query directly on your DbContext instead of using the navigation property and see what the performance difference is. Your overhead may be coming from the 2,999 points you don't need being loaded.
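A quick way to measure the difference between the two approaches (a sketch using the question's model; filtering via mp.Sensor.SensorId is an assumption about how the key is exposed):
var sw = System.Diagnostics.Stopwatch.StartNew();

// Navigation property: lazy-loads all ~3,000 points, then sorts them in memory.
var viaNavigation = sensor.MeasurePoints
    .OrderByDescending(mp => mp.TimeStamp)
    .First();
long navigationMs = sw.ElapsedMilliseconds;

sw.Restart();

// Direct query: the filter, sort, and TOP (1) all run in SQL.
var viaContext = context.MeasurePoints
    .Where(mp => mp.Sensor.SensorId == sensorId)
    .OrderByDescending(mp => mp.TimeStamp)
    .First();
long directMs = sw.ElapsedMilliseconds;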

Why is Entity Framework taking 30 seconds to load records when the generated query only takes 1/2 of a second?

The executeTime below is 30 seconds the first time, and 25 seconds the next time I execute the same set of code. When watching in SQL Profiler, I immediately see a login, then it just sits there for about 30 seconds. As soon as the select statement is run, the app finishes the ToList command. When I run the generated query from Management Studio, the database query only takes about 400ms. It returns 14 rows and 350 columns. The time spent transforming the database results into entities appears to be so small it is not noticeable.
So what is happening in the 30 seconds before the database call is made?
If Entity Framework is this slow, it is not possible for us to use it. Is there something I am doing wrong, or something I can change to speed this up dramatically?
UPDATE:
All right: if I use a compiled query, the first time it takes 30 seconds, and the second time it takes a quarter of a second. Is there anything I can do to speed up the first call?
using (EntitiesContext context = new EntitiesContext())
{
    Stopwatch sw = new Stopwatch();
    sw.Start();
    var groupQuery = (from g in context.Groups.Include("DealContract")
                          .Include("DealContract.Contracts")
                          .Include("DealContract.Contracts.AdvertiserAccountType1")
                          .Include("DealContract.Contracts.ContractItemDetails")
                          .Include("DealContract.Contracts.Brands")
                          .Include("DealContract.Contracts.Agencies")
                          .Include("DealContract.Contracts.AdvertiserAccountType2")
                          .Include("DealContract.Contracts.ContractProductLinks.Products")
                          .Include("DealContract.Contracts.ContractPersonnelLinks")
                          .Include("DealContract.Contracts.ContractSpotOrderTypes")
                          .Include("DealContract.Contracts.Advertisers")
                      where g.GroupKey == 6
                      select g).OfType<Deal>();
    sw.Stop();
    var queryTime = sw.Elapsed;
    sw.Reset();
    sw.Start();
    var groups = groupQuery.ToList();
    sw.Stop();
    var executeTime = sw.Elapsed;
}
I had this exact same problem; my query was taking 40 seconds.
I found the problem was with the .Include("table_name") calls. The more of these I had, the worse it was. Instead, I changed my code to lazy-load all the data I needed right after the query, which knocked the total time down to about 1.5 seconds from 40 seconds. As far as I know, this accomplishes the exact same thing.
So for your code it would be something like this:
var groupQuery = (from g in context.Groups
                  where g.GroupKey == 6
                  select g).OfType<Deal>();
var groups = groupQuery.ToList();
foreach (var g in groups)
{
    // Assuming DealContract is an object, not a collection of objects
    g.DealContractReference.Load();
    if (g.DealContract != null)
    {
        foreach (var d in g.DealContract)
        {
            // If the reference is to a collection, you can just do a straight .Load();
            // if it is an object, you call .Load() on the reference instead, as with g.DealContractReference above
            d.Contracts.Load();
            foreach (var c in d.Contracts)
            {
                c.AdvertiserAccountType1Reference.Load();
                // etc....
            }
        }
    }
}
Incidentally, if you were to add this line of code above the query in your current code, it would knock the time down to about 4-5 seconds (still too long in my opinion). From what I understand, the MergeOption.NoTracking option disables a lot of the tracking overhead for updating and inserting stuff back into the database:
context.Groups.MergeOption = MergeOption.NoTracking;
It is because of the Includes. My guess is that you are eager-loading a lot of objects into memory. It takes a long time to build the C# objects that correspond to your DB entities.
My recommendation is to try to lazy-load only the data you need.
The only way I know of to make the initial compilation of the query faster is to make the query less complex. The MSDN documentation on performance considerations for the Entity Framework and compiled queries doesn't indicate any way to save a compiled query for use in a different application execution session.
I would add that we have found that having lots of Includes can make query execution slower than having fewer Includes and doing more Loads on related entities later. Some trial and error is required to find the right medium.
However, I have to ask if you really need every property of every entity you are including here. It seems to me that there is a large number of different entity types in this query, so materializing them could well be quite expensive. If you are just trying to get tabular results which you don't intend to update, projecting the (relatively) few fields that you actually need into a flat, anonymous type should be significantly faster for various reasons. It also frees you from having to worry about eager loading, calling Load/IsLoaded, etc.
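For illustration, a sketch of that projection approach reusing the question's query shape (iterating g.DealContract and the ContractId/Advertiser property names are assumptions about the model; only GroupKey appears in the original):
var rows = (from g in context.Groups.OfType<Deal>()
            where g.GroupKey == 6
            from dc in g.DealContract
            from c in dc.Contracts
            select new
            {
                g.GroupKey,
                ContractId = c.Id,                                // hypothetical property
                Advertiser = c.Advertisers.FirstOrDefault().Name  // hypothetical property
            }).ToList(); // a single SELECT of just these columns; no entity materialization or change tracking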
You can certainly speed up the initial view generation by pre-compiling the entity views; there is documentation on MSDN for this. But since you pay that cost at the time the first query is executed, your test with a simple query shows that this is running in the neighborhood of 2 seconds for you. It's nice to save those 2 seconds, but it won't save anything else.
EF takes a while to start up. It needs to build metadata from XML and probably generate the objects used for mapping.
So it takes a few seconds to start up; I don't think there is a way to get around that, except never restarting your program.
