Retrieving large datasets using Fluent NHibernate - c#

I'm building a solution where I'm retrieving large amounts of data from the database (5k to 10k records). Our existing data access layer uses Fluent NHibernate, but I'm "scared" that I will incur a large amount of overhead by hydrating object models that represent the database entities.
Can I retrieve simply an ADO dataset?

Yes, you should be concerned about the performance of this. You can look at using the IStatelessSession functionality of NHibernate. However, this probably won't give you the performance you are looking for. While I haven't used NH since 2.1.2GA, I would find it unlikely that they've substantially improved the performance of NH when it comes to bulk operations. To put it bluntly, NH just sucks (and most ORMs in general, for that matter) when it comes to bulk operations.
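For reference, a minimal sketch of what an IStatelessSession read might look like; the entity and session-factory names are made up for illustration:

```csharp
// IStatelessSession skips the first-level cache, change tracking, and most
// event listeners, so bulk reads allocate noticeably less than ISession.
// CatRecord and sessionFactory are hypothetical names, not from the question.
using (IStatelessSession session = sessionFactory.OpenStatelessSession())
{
    IList<CatRecord> cats = session
        .CreateQuery("from CatRecord")
        .List<CatRecord>();
    // Entities returned here are detached: no lazy loading, no dirty checking.
}
```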
Q: Can I retrieve simply an ADO dataset?
Of course you can. Just because you're using NHibernate doesn't mean you can't new up an ADO.NET connection and hit the database in the raw.
As much as I loathe DataTables and DataSets, this is one of the rare cases where you might want to consider using them instead of adding the overhead of mapping and creating the objects associated with your 10k rows of data.

Depending on how much performance you need, there are a few options. Nothing will ever beat using a SqlDataReader, as that's what's underneath just about every .NET ORM implementation. In addition to being the fastest, it can take a lot less memory if you don't need to keep a list of all the records after the query.
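As a rough illustration, streaming rows with a SqlDataReader looks something like this (the connection string, table, and column names are placeholders):

```csharp
// Forward-only, read-only streaming: rows are processed one at a time,
// so memory stays flat even for very large result sets.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT Id, Name FROM Cats", conn))
{
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            long id = reader.GetInt64(0);
            string name = reader.GetString(1);
            // Process each row here instead of materializing a full list.
        }
    }
}
```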
Now, as for your performance worries, 5k-10k records isn't that high. I've pulled upwards of a million rows out of NHibernate before, but obviously the records weren't huge and it was for a single case. If you're doing this on a high-traffic website then of course you will have to be more efficient if you're hitting bottlenecks. If you're thinking about DataSets, I'd suggest instead trying Massive, because it should still be more efficient than DataSet/DataTable and more convenient.

You can use "scalar queries", which are in fact native SQL queries returning a list of object[] (one object[] per row):
sess.CreateSQLQuery("SELECT * FROM CATS")
    .AddScalar("ID", NHibernateUtil.Int64)
    .AddScalar("NAME", NHibernateUtil.String)
    .AddScalar("BIRTHDATE", NHibernateUtil.Date)
    .List();
Example from NHibernate documentation: http://nhibernate.info/doc/nh/en/index.html#d0e10794

Related

How entity framework works for large number of records? [closed]

I've already seen an unanswered question on this here.
My question is -
Is EF really production ready for large application?
The question originated from these underlying questions -
1. EF pulls all the records into memory and then performs the query operation. How would EF behave when a table has around ~1000 records?
2. For a simple edit I have to pull the record, edit it, and then push it to the DB using SaveChanges().
I faced a similar situation where we had a large database with many tables, 7-10 million records each. We used Entity Framework to display the data. To get nice performance, here's what I learned; my 10 golden rules for Entity Framework:
Understand that a call to the database is made only when the actual records are required; all the other operations just build up the query (SQL). So try to fetch only the piece of data you need rather than requesting a large number of records. Trim the fetch size as much as possible.
Yes (in some cases stored procedures are a better choice; they are not as evil as some would make you believe), you should use stored procedures where necessary. Import them into your model and have function imports for them. You can also call them directly with ExecuteStoreCommand() and ExecuteStoreQuery<>(). The same goes for functions and views, but EF has a really odd way of calling functions: "SELECT dbo.blah(@id)".
EF performs slower when it has to populate an entity with a deep hierarchy, so be extremely careful with entities with deep hierarchies.
Sometimes, when you are requesting records and are not required to modify them, you should tell EF not to watch the property changes (AutoDetectChanges). That way record retrieval is much faster.
Indexing the database is always good, but in the case of EF it becomes very important. The columns you use for retrieval and sorting should be properly indexed.
When your model is large, the VS2010/VS2012 model designer gets really crazy, so break your model into medium-sized models. There is a limitation: entities from different models cannot be shared, even though they may be pointing to the same table in the database.
When you have to make changes to the same entity at different places, use the same entity instance, make all the changes, and save it only once. The point is to AVOID retrieving the same record, making changes, and saving it multiple times. (A real performance-gain tip.)
When you need the info from only one or two columns, try not to fetch the full entity. You can either execute your SQL directly or use a mini-entity of some kind. You may also need to cache some frequently used data in your application.
Transactions are slow. Be careful with them.
SQL Profiler, or any query profiler, is your friend. Run it while developing your application to see what EF sends to the database. When you perform a join using LINQ or a lambda expression in your application, EF usually generates a Select-Where-In-Select style query which may not always perform well. If you find such a case, roll up your sleeves, perform the join on the DB, and have EF retrieve the results. (I forgot this one, the most important one!)
If you keep these things in mind, EF should give you performance almost similar to plain ADO.NET, if not the same.
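A couple of the rules above (trim the fetch, don't track read-only queries) might look like this in EF6 code; the context and entity names are invented for illustration:

```csharp
// ShopContext and Customers are hypothetical; the pattern is what matters.
using (var context = new ShopContext())
{
    // Rule: don't watch property changes for read-only retrieval.
    context.Configuration.AutoDetectChangesEnabled = false;

    // Rule: fetch only the columns you need, not the full entity.
    var names = context.Customers
        .Where(c => c.IsActive)
        .Select(c => new { c.Id, c.Name })   // projection, nothing is tracked
        .Take(100)                           // trim the fetch size
        .ToList();                           // SQL is sent only at this call
}
```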
1. EF pulls all the records into memory then performs the query operation. How would EF behave when a table has around ~1000 records?
That's not true! EF fetches only the necessary records, and queries are transformed into proper SQL statements. EF can cache objects locally within a DataContext (and track all changes made to entities), but as long as you follow the rule of keeping the context open only when needed, it won't be a problem.
2. For a simple edit I have to pull the record, edit it, and then push it to the DB using SaveChanges().
It's true, but I would not bother doing anything about it unless you really see performance problems. Because 1. is not true, you'll only fetch one record from the DB before it's saved. You can bypass that by creating the SQL query as a string and sending it as a plain string.
EF translates your LINQ query into an SQL query, so it doesn't pull all records into memory. The generated SQL might not always be the most efficient, but a thousand records won't be a problem at all.
Yes, that's one way of doing it (assuming you only want to edit one record). If you are changing several records, you can get them all using one query and SaveChanges() will persist all of those changes.
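That pattern of editing several records and persisting them in one go might be sketched as follows (the context and entity names are hypothetical):

```csharp
using (var context = new ShopContext())
{
    // One query fetches every record to edit; the context tracks them all.
    var stale = context.Orders.Where(o => o.Status == "Pending").ToList();

    foreach (var order in stale)
        order.Status = "Cancelled";

    // A single SaveChanges persists every tracked change in one batch of updates.
    context.SaveChanges();
}
```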
EF is not a bad ORM framework. It is a different one with its own characteristics. Compare Microsoft Entity Framework 6 against, say, NetTiers, which is powered by Microsoft Enterprise Library 6.
These are two entirely different beasts. The accepted answer is really good because it goes through the nuances of EF6. What's key to understand is that each ORM has its own strengths and weaknesses. Compare the project requirements and its data access patterns against the ORM's behavior patterns.
For example: NetTiers will always give you higher raw performance than EF6. However, that is primarily because it is not a point-and-click ORM, and as part and parcel of generating the ORM you will be optimizing your data model, adding custom stored procedures where relevant, etc. If you engineered your data model with the same effort for EF6, you could probably get close to the same performance.
Also consider: can you modify the ORM? For example, with NetTiers you can add extensions to the CodeSmith templates to include your own design patterns over and above what is generated by the base ORM library.
Also consider that EF6 makes significant use of reflection, whereas NetTiers, or any library powered by Microsoft Enterprise Library, will make heavy use of generics instead. These are two entirely different approaches: EF6 is based on dynamic reflection, whereas NetTiers is based on static reflection. Which is faster and which is better depends entirely on the usage patterns that will be demanded of the ORM.
Sometimes a hybrid approach works better. Consider, for example: EF6 for Web API OData endpoints; a few large tables wrapped with NetTiers and Microsoft Enterprise Library with custom stored procedures; and a few large master-data tables wrapped with a custom-built write-through object cache, where on initial load the record set is streamed into the memory cache using an ADO data reader.
These are all different, and they all have their best-fit scenarios: EF6, NetTiers, NHibernate, Wilson OR Mapper, XPO from DevExpress, etc.
There is no simple answer to your question. The main thing is: what do you want to do with your data? And do you need that much data at one time?
EF translates your queries to SQL, so at that point there are no objects in memory. When you get the data, the selected records are in memory. If you are selecting a large number of large objects, that can be a performance killer if you need to manipulate them all.
If you don't need to manipulate them all, you can disable change tracking and enable it later for the single objects you need to manipulate.
So you see, it depends on your type of application.
If you need to manipulate a mass of data efficiently, then don't use an OR mapper!
Otherwise EF is fine, but consider how many objects you really need at one time and what you want to do with them.

sqlite alternative for huge data lists?

I'm currently using embedded SQLite to store relatively big lists of data (starting from 100,000 rows per table). Queries include only:
paging
sorting by a field
The amount of data in a row is relatively small. Performance is really bad, especially for the first query, which is critical for my application. All kinds of tunings and pre-caching have already been tried, and I've reached the practical limit.
Is there any alternative embedded data store library which can do these simple queries in a very fast and efficient way? There's no requirement for it to support SQL at all.
If it is (predominantly) read-only, consider using memory mapped views of a file.
It will be possible to achieve maximum performance by rolling your own indexes. Obviously, rolling your own will also be the most work-intensive and error-prone option.
May I suggest a traditional RDBMS with good indexes, or perhaps a newfangled NoSQL-style DB that supports your workload?
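A bare-bones sketch of the memory-mapped approach in .NET, assuming fixed-size records (the file name and record layout are invented):

```csharp
// Assume each record is a fixed 16 bytes: a long key followed by a long payload.
// With fixed-size records, "indexing" is just pointer arithmetic into the view,
// and the OS page cache keeps hot pages in RAM after the first touch.
using (var mmf = MemoryMappedFile.CreateFromFile("data.bin", FileMode.Open))
using (var view = mmf.CreateViewAccessor())
{
    const int recordSize = 16;
    long ReadKey(long recordIndex) => view.ReadInt64(recordIndex * recordSize);

    // If the file is sorted by key, a binary search over ReadKey gives
    // O(log n) lookups with no SQL engine involved at all.
}
```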
You can try Lucene.Net; it is blazing fast, does not require any installation, supports paging and sorting by fields, and much, much more.
http://incubator.apache.org/lucene.net/
With Simple Lucene wrapper it is also quite easy to use: http://blogs.planetcloud.co.uk/mygreatdiscovery/post/SimpleLucene-e28093-Lucenenet-made-easy.aspx

Cache data from DB using EF 4

I have an application which needs to keep data from DB in memory.
There are 5-6 tables with very few rows, the tables are updated very rarely, and as the application needs this data very frequently I would like to avoid requesting the DB on each action.
I am using Entity Framework 4 (LINQ to Entities), and it sends a request every time I query. I know it is possible to avoid that using ToList() or so, but I need info from those 6 tables and the queries apply joins.
What would be the better solution?
The purpose of a query is to be executed. You can check the EF Caching Wrapper to see if it solves the problem, but I don't think so. The caching provider caches the actual query, so it is enough to change the where condition and it is considered a different query.
This should instead be done by loading your data into custom data structures (lists) and using LINQ to Objects on them.
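A sketch of that approach: load the small tables once, then join the cached lists in memory with LINQ to Objects (all type names are placeholders):

```csharp
// Load each small, rarely-changing table exactly once, e.g. at startup.
// Country, Region, and context are hypothetical names for illustration.
List<Country> countries = context.Countries.ToList();
List<Region> regions = context.Regions.ToList();

// Later requests join the cached lists entirely in memory; no SQL is issued.
var lookup =
    from r in regions
    join c in countries on r.CountryId equals c.Id
    select new { r.Name, Country = c.Name };
```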
If you are joining that data to other data which is not a candidate for caching, I would suggest looking at your database engine's features. Most advanced SQL databases will place those tables in RAM already. You will already be incurring network latency overhead when you issue the query for the non-cached data, and the database will already have an index in RAM as well, unless you are talking about big rows such as images. You would just be moving a small amount of processing from one place to another. Plus, in order to be as efficient as the SQL database, not only do you need to figure out how to cache the data, you also need to cache an index and write code to use and maintain it.
Still, in some use cases it would be very useful thing to do.

Is NHibernate faster than a classic ODBC Driver?

I am working on an application with a fairly simple data model (4 tables: two small ones with around 10 rows each, and two bigger ones with hundreds of rows).
I'm working with C# and currently use an OdbcDriver for my Data Access Layer.
I was wondering if there is any difference in terms of performance between this driver or NHibernate?
The application works, but I'd like to know if installing NHibernate instead of a classic OdbcDriver would make it faster. If so, is the difference really worth installing NHibernate (given that I have never used such technology)?
Thanks!
Short answer: no, NHibernate will actually slow your performance in most cases.
Longer answer: NHibernate uses the basic ADO.NET drivers, including OdbcConnection (if there's nothing better), to perform the actual SQL queries. On top of that, it is using no small amount of reflection to digest queries into SQL, and to turn SQL results into lists of objects. This extra layer, as flexible and powerful as it is, is going to perform more slowly than a hard-coded "firehose" solution based on a DataReader.
Where NHibernate may get you the APPEARANCE of faster performance is in "lazy-loading". Say you have a list of People, who each have a list of PhoneNumbers. You are retrieving People from the database, just to get their names. A naive DataReader-based implementation may involve calling a stored procedure for the People that includes a join to their PhoneNumbers, which you don't need in this case. Instead, NHibernate will retrieve only People, and set a "proxy" into the reference to the list of PhoneNumbers; when the list needs to be evaluated, the proxy object will perform another call. If the phone number is never needed, the proxy is never evaluated, saving you the trouble of pulling phone numbers you don't need.
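With Fluent NHibernate, that lazy collection would typically be declared in the mapping, roughly like this (the class and property names just follow the People/PhoneNumbers example above):

```csharp
// Hypothetical mapping for the Person example: the PhoneNumbers collection
// is replaced by a proxy and only queried when it is first enumerated.
public class PersonMap : ClassMap<Person>
{
    public PersonMap()
    {
        Id(p => p.Id);
        Map(p => p.Name);
        HasMany(p => p.PhoneNumbers)
            .LazyLoad();   // lazy is the default; shown here for emphasis
    }
}
```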
NHibernate isn't about making things faster, and it'll always be slower than just using the database primitives like you are (it uses them "under the hood").
In my opinion, NHibernate is about making a reusable entity layer that can be applied to different applications, or at the very least reused in multiple areas in one medium-to-large application. Therefore moving your application to NHibernate would be a waste of time (it sounds very small).
You might get better performance by using a database driver specific to your database engine.
For the amount of data in your database it won't make any difference. But in general, using NHibernate will slow down application performance while increasing development speed. This is generally true for all ORMs.
Some hints: NHibernate is not magic. It sits on top of ADO.NET. Want a faster driver? Get one. Why are you using a slow, outdated technology like ODBC anyway? What is your data source? Doesn't it support any newer standard, like OLE DB?

Virtual Database in Memory

Imagine the following:
I have a table of 57,000 items that I regularly use in my application to figure out things like targeting groups etc.
Instead of querying the database 300,000 times a day, for a table that hardly ever changes its data, is there a way to store its information in my application and poll the data in memory directly? Or do I have to create some sort of custom datatype for each row and iterate through it, testing each row to check for the results I want?
After some googling, the closest thing I could find is an in-memory database.
thank you,
- theo
SQLite supports in-memory tables.
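For example, with the System.Data.SQLite ADO.NET provider (the provider choice here is an assumption) an in-memory database is just a connection string:

```csharp
// "Data Source=:memory:" creates a database that lives only as long as
// the connection stays open, entirely in RAM.
using (var conn = new SQLiteConnection("Data Source=:memory:"))
{
    conn.Open();
    using (var cmd = conn.CreateCommand())
    {
        cmd.CommandText = "CREATE TABLE Items (Id INTEGER PRIMARY KEY, Name TEXT)";
        cmd.ExecuteNonQuery();
        // Load the 57,000 rows once at startup, then query them with
        // ordinary SQL without ever touching the real database again.
    }
}
```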
For 57,000 items that you will be querying against (and want immediately available), I would not recommend implementing just simple caching. For that many items I'd recommend either a distributed memory cache (even if it's only on one machine) such as Memcached, Velocity, etc., or going with your initial idea of using an in-memory database.
Also, if you use any full-fledged ORM such as NHibernate, you can set it up to use clients for the distributed caching tools with almost no work. Many of the major clients have NHibernate implementations, including Memcached, Velocity, and some others. This might be a better solution, as it only caches the data the application is truly using, not all the data it might need.
Read up on caching.
It sounds like this is application-level data rather than user-level data, so you should look into "Caching Application Data".
Here are some samples of caching DataTables.
If you only need to find rows using the same key all the time, a simple Dictionary<TKey, TValue> could very well be all you need. To me, 57,000 items doesn't really sound like much unless each row contains a huge amount of data. However, if you need to search by different columns, an in-memory database is most likely the way to go.
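For the single-key case, the cache can be as simple as this (Item and LoadAllItems are placeholder names for your row type and load routine):

```csharp
// Build the dictionary once from the database, then every lookup is O(1)
// in memory with no round trip to the database at all.
Dictionary<int, Item> cache = LoadAllItems().ToDictionary(i => i.Id);

if (cache.TryGetValue(42, out Item item))
{
    // Use item; refresh the whole dictionary on the rare table update.
}
```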
