Serializing complex EF model over JSON - c#

I have done a lot of searching and experimenting and have been unable to find a workable resolution to this problem.
Environment/Tools
Visual Studio 2013
C#
Three tier web application:
Database tier: SQL Server 2012
Middle tier: Entity Framework 6.* using Database First, Web API 2.*
Presentation tier: MVC 5 w/Razor, Bootstrap, jQuery, etc.
Background
I am building a web application for a client that requires a strict three-tier architecture. Specifically, the presentation layer must perform all data access through a web service. The presentation layer cannot access a database directly. The application allows a small group of paid staff members to manage people, waiting lists, and the resources they are waiting for. Based on the requirements the data model/database design is entirely centered around the people (User table).
Problem
When the presentation layer requests something, say a Resource, it is related to at least one User, which in turn is related to some other table, say Roles, which are related to many more Users, which are related to many more Roles and other things. The point being that, when I query for just about anything, EF wants to bring in almost the entire database.
Normally this would be okay because of EF's default lazy-load behavior, but when serializing just about any object to JSON for return to the presentation layer, the Newtonsoft.Json serializer hangs for a long time and then fails with a stack overflow error.
What I Have Tried
Here is what I have attempted so far:
Set Newtonsoft's JSON serializer ReferenceLoopHandling setting to Ignore. No luck. This is not a cyclic-graph issue; it is just the sheer volume of data that gets brought in (there are over 20,000 Users).
Clear/reset unneeded collections and set reference properties to null. This showed some promise, but I could not get around Entity Framework's desire to track everything.
Just setting nav properties to null/clear causes those changes to be saved back to the database on the next .SaveChanges() (NOTE: this is an assumption, but it seemed pretty sound. If anyone knows differently, please speak up).
Detaching the entities causes EF to automatically clear ALL collections and set ALL reference properties to null, whether I wanted it to or not.
Using .AsNoTracking() on everything threw some exception about not allowing non-tracked entities to have navigation properties (I don't recall the exact details).
Use AutoMapper to make copies of the object graph, only including related objects I specify. This approach is basically working, but in the process of (I believe) performing the auto-mapping, all of the navigation properties are accessed, causing EF to query and resolve them. In one case this leads to almost 300,000 database calls during a single request to the web service.
What I am Looking For
In short, has anyone had to tackle this problem before and come up with a working and performant solution?
Lacking that, any pointers for at least where to look for how to handle this would be greatly appreciated.
Additional Note: It occurred to me as I wrote this that I could possibly combine the second and third items above. In other words, set/clear nav properties, then automap the graph to new objects, then detach everything so it won't get saved (or perhaps wrap it in a transaction and roll it back at the end). However, if there is a more elegant solution I would rather use that.
Thanks,
Dave

It is true that doing what you are asking for is very difficult and it's an architectural trap I see a lot of projects get stuck in.
Even if this problem were solvable, you'd basically end up with a data layer that just wraps the database and destroys performance, because you can't leverage SQL properly.
Instead, consider building your data access service in such a way that it returns meaningful objects containing meaningful data; that is, only the data required to perform a specific task outlined in the requirements documentation. It is true that a post is related to an account, which has many achievements, etc., etc. But usually all I want is the text and the name of the poster, and I don't want it for just one post; I want it for each post in a page. Instead, write data services and methods which do things that are relevant to your application.
To clarify, it's the difference between returning a Page object containing a list of Posts which contain only a poster name and message, and returning entire EF objects containing large amounts of irrelevant data such as IDs and auditing data like creation time.
Consider the Twitter API. If it were implemented as above, performance would be abysmal with the amount of traffic Twitter gets. And most of the information returned (costing CPU time, disk activity, DB connections as they're held open longer, network bandwidth) would be completely irrelevant to what developers want to do.
Instead, the API exposes what would be useful to a developer looking to make a Twitter app. Get me the posts by this user. Get me the bio for this user. This is probably implemented as very nicely tuned SQL queries for someone as big as Twitter, but for a smaller client, EF is great as long as you don't attempt to defeat its performance features.
This additionally makes testing much easier as the smaller, more relevant data objects are far easier to mock.
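As a concrete sketch of what such a purpose-built read method might look like, here is a minimal EF projection straight into small DTOs. The Post/Page/Account names and the BlogContext are illustrative assumptions, not the question's actual model:
using System.Collections.Generic;
using System.Linq;

public class PostSummaryDto
{
    public string PosterName { get; set; }
    public string Message { get; set; }
}

public class PageDto
{
    public int PageNumber { get; set; }
    public List<PostSummaryDto> Posts { get; set; }
}

public class PostService
{
    public PageDto GetPage(int pageNumber, int pageSize)
    {
        using (var context = new BlogContext())   // hypothetical DbContext with a Posts DbSet
        {
            // The Select projection means EF generates SQL that reads only these
            // columns; no full entities are materialized and nothing lazy-loads.
            var posts = context.Posts
                .OrderByDescending(p => p.CreatedOn)
                .Skip((pageNumber - 1) * pageSize)
                .Take(pageSize)
                .Select(p => new PostSummaryDto
                {
                    PosterName = p.Account.DisplayName,
                    Message = p.Text
                })
                .ToList();

            return new PageDto { PageNumber = pageNumber, Posts = posts };
        }
    }
}
Because the DTOs have no navigation properties, serializing them cannot pull in anything beyond what the query explicitly selected.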

For three-tier applications, especially if you are going to expose your entities "raw" in services, I would recommend that you disable Lazy Loading and Proxy generation in EF. Your alternative would be to use DTOs instead of entities, so that the web services return a model object tailored to the service instead of the entity (as suggested by jameswilddev).
Either way will work, and has a variety of trade-offs.
If you are using EF in a multi-tier environment, I would highly recommend Julia Lerman's DbContext book (I have no affiliation): http://www.amazon.com/Programming-Entity-Framework-Julia-Lerman-ebook/dp/B007ECU7IC
There is a chapter in the book dedicated to working with DbContext in multi-tier environments (you will see the same recommendations about Lazy Load and Proxy). It also talks about how to manage inserts and updates in a multi-tier environment.
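To illustrate the first recommendation, here is a minimal sketch of turning off lazy loading and proxy creation for a unit of work. The context and entity names are placeholders for whatever Database First generated:
using System.Data.Entity;   // for the lambda Include extension
using System.Linq;

public class ResourceService
{
    public Resource GetResource(int resourceId)
    {
        using (var context = new MyDatabaseEntities())   // placeholder name for the generated context
        {
            // With these off, navigation properties are only populated when
            // explicitly .Include()d, so the serializer cannot wander through
            // the whole graph, and no dynamic proxy types end up in the JSON.
            context.Configuration.LazyLoadingEnabled = false;
            context.Configuration.ProxyCreationEnabled = false;

            return context.Resources                     // hypothetical entity set
                .Include(r => r.User)                    // load only what this service call needs
                .FirstOrDefault(r => r.ID == resourceId);
        }
    }
}
If I remember correctly, an EDMX model also exposes a Lazy Loading Enabled flag on the designer surface, so the generated context can default to this instead of setting it per instance.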

I had a project like this, and it was a stressful one... I also needed to load a large amount of data, process it from different angles, and pass it to a complex dashboard of charts and tables.
My optimizations were:
1- Instead of using EF to load the data, I called old-school stored procedures (and, for further optimization, grouped the results to keep the result sets as small as possible for the charts; e.g. a single query returns a table from which the datasets for multiple charts can be extracted). See the sketch after this answer for what that call can look like.
2- More importantly, instead of Newtonsoft's JSON I used fastJSON, whose performance is worth mentioning (it is really fast, but not fully compatible with complex objects; a simple example would be view models that contain lists of other models, nested further, and so on). It is better to read the pros and cons of fastJSON first:
https://www.codeproject.com/Articles/159450/fastJSON
3- The relational database design is the prime suspect in this problem. It may be a good idea to put the tables that hold the raw data to be processed (most probably for analytics) into a denormalized schema, which saves time when querying the data.
Also be wary of using the model classes generated by the EF designer (from the database) for reading or selecting data, especially when you want to serialize them (sometimes I think about splitting the same schema into two sets of otherwise identical classes/models for writing and reading, so that the write models keep the benefit of the virtual collections that come from the foreign keys while the read models ignore them... I am not sure about this).
NOTE: In the case of very, very large data it is better to go deeper and set up in-memory OLTP for the tables that contain facts or raw data; in that case your table acts like a non-relational, NoSQL-style table.
NOTE: For example, in MS SQL you can use the benefits of SQLCLR, which lets you write routines in C#, VB, etc. and call them from T-SQL; in other words, you can handle the data processing at the database level.
4- For interactive views that need to load data, consider which information should be processed on the server side and which can be handled on the client side (sometimes it is better to query data from the client side; however, you should keep in mind that data sent to the client side can be accessed by the user). It is situation-dependent.
5- For views over large raw data tables, datatables.min.js is a good idea, and everyone suggests using server-side paging on such tables.
6- For importing and exporting data from big files, I think OLE DB is the best choice.
However, I still doubt that these are exact solutions. If anybody has practical solutions, please mention them ;)
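For point 1, this is roughly how a pre-aggregated stored procedure result can be pulled through EF without dragging the entity graph along. The procedure, context and DTO names are made up for the example:
using System.Collections.Generic;
using System.Linq;

// DTO shaped exactly like the stored procedure's result set (hypothetical).
public class ChartPointDto
{
    public string Category { get; set; }
    public decimal Value { get; set; }
}

public class DashboardQueries
{
    public List<ChartPointDto> GetChartData(int year)
    {
        using (var context = new MyDatabaseEntities())   // placeholder DbContext name
        {
            // Database.SqlQuery maps the rows straight onto plain DTOs: nothing is
            // change-tracked and there are no navigation properties to lazy-load or serialize.
            return context.Database
                .SqlQuery<ChartPointDto>(
                    "EXEC dbo.GetDashboardChartData @Year",
                    new System.Data.SqlClient.SqlParameter("@Year", year))
                .ToList();
        }
    }
}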

I have fiddled with a similar problem using EF model first, and found the following solution satisfying for "One to Many" relations:
Include "Foreign key properties" in the sub-entities and use this for later look-up.
Define the get/set modifiers of any "Navigation Properties" (sub-collections) in your EF entity to private.
This will give you an object that does not expose the sub-collections, and you will only get the main properties serialized. This workaround will require some restructuring of your LINQ queries, querying directly against your table of SubItems with the foreign key property as your filtering option, like this:
var myFitnessClubs = context.FitnessClubs
    ?.Where(f => f.FitnessClubChainID == myFitnessClubChain.ID);
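For context, this is roughly the shape such entities end up with (shown code-first style just for readability; the FitnessClub names follow the example above and the exact mapping details are assumptions):
using System.Collections.Generic;

public class FitnessClub
{
    public int ID { get; set; }

    // The foreign key property stays public, so it is serialized and can be
    // used for look-ups against the parent without a navigation property.
    public int FitnessClubChainID { get; set; }

    public string Name { get; set; }
}

public class FitnessClubChain
{
    public int ID { get; set; }
    public string Name { get; set; }

    // Sub-collection hidden from serializers by making it private,
    // so only the scalar properties above end up in the output.
    private ICollection<FitnessClub> FitnessClubs { get; set; }
}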
Note 1:
You may of course choose to implement this solution only partly, so that it affects just the sub-collections that you definitely do not want serialized.
Note 2:
For "Many to Many" relations, at least one of the entities needs to have a public representation of the collection. Since the relation cannot be retrieved using a single ID property.

Related

Does setting serialization mode to Unidirectional have any drawback or performance hit?

I want to add a history feature for one of my application's entities. My application is using linq to sql as O/RM.
The solution I have selected for saving previous versions of an entity is to serialize the old version and save the serialized versions somewhere.
Anyway, I was looking for the best way to serialize LINQ to SQL entities. One of the best solutions I found was simply setting the serialization mode to Unidirectional on the DataContext object and then using a .NET serializer. It seems that the method works well.
But my question is, does this method have any side effects on the data context or any performance hit? For example, does lazy loading of objects work intact after turning unidirectional serialization mode on?
I've searched a lot but found nothing.
And, do you think I should go another way to serialize objects? Any cheaper method?
PS 1: I don't like using DTOs, because I would have to maintain compatibility between the DataContext objects and the DTOs.
PS 2: Please note, that the application I'm working on is relatively large and currently under heavy load. So I should be careful about changes in O/RM behavior or performance issues, because I could not review the entire application + online versions.
Thanks a lot :)
Re "and then use .net serializer", the intent of unidirectional mode is to support DataContractSerializer (specifically). The effect is basically just the addition of the [DataContract]/[DataMember] flags, but: based on experience, and despite your reservations, I would strongly advise you to go the DTO route:
ORMs, due to lazy loading of both sub-collections and (sometimes) properties, make it very hard to predict exactly what is going to happen
and at the same time, make it hard to monitor when data loading is happening (an N+1 in your serialization code can go unnoticed)
you have very little control over how much of the model (how many levels of descendants, etc.) is going to be serialized
and don't even get me started on parent navigation
the deserialized objects will not have a data-context, which means that their lazy loading won't work - this can lead to unexpected behaviours
you can get into all sorts of problems if you can't version your data model separately from your serialization model
there are some... "peculiarities" that impact LINQ-to-SQL and serialization (from memory, the most "fun" one is a difference in how IList.Add and IList<T>.Add are implemented, meaning that one triggers data-change flags and one doesn't; nasty)
you are making everything dependent on your data model, which also means that you can never change your data model (at least, not easily)
a lot of scenarios that involve DTOs are read-based, not write-based, so you might not even need the ORM overheads in the first place - for example, you might be able to replace some hot code-path pieces with lighter tools (dapper, etc) - you can't do that well if everything is tied to the data model
I strongly advocate biting the bullet and creating a DTO model that matches what you want to serialize; the DTO model will be incredibly simple and should only take a few minutes to throw together, including the code to map to/from your data model. This gives you total control over what is serialized, while also giving you the freedom to iterate without headaches and to do things like use different serializers.
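As a rough sketch of what that can look like (the Customer entity and its properties are invented for illustration, not taken from the question):
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;

// Serialization model: plain, flat, no ORM attachments, versioned independently.
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public List<int> OrderIds { get; set; }
}

public static class CustomerMapper
{
    // The mapping is explicit, so you decide exactly what gets loaded and
    // exactly how deep the serialized graph goes.
    public static CustomerDto ToDto(Customer entity)   // Customer = hypothetical LINQ-to-SQL entity
    {
        return new CustomerDto
        {
            Id = entity.CustomerID,
            Name = entity.Name,
            OrderIds = entity.Orders.Select(o => o.OrderID).ToList()
        };
    }
}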

Application DAL design

Hello and thanks for looking.
I have a DAL question for an application I'm working on. The app is going to extract some data from 5-6 tables from a production RDBMS that serves a much more critical role in the org. What the app has to do is use the data in these tables, analyze, apply some business logic/rules and then present.
The restrictions are that since the storage model is critical in nature to the org, I need to restrict how the app will request the data. Since the tables are relatively small, I created my data access to use DataTables to load the entirety of the db tables on a fixed interval using a timer.
My questions are really around my current design and the potential use of EF or LINQ to SQL:
Can EF/L2S work around the restrictions of the RDBMS? In most tutorials I've seen, the storage exists solely for the application. Can access to the storage be controlled, and/or can EF use DataTables rather than an RDBMS?
Since the tables are going to be loaded in their entirety, is there a best practice for creating classes to consume the data within these tables? I will have to do in-memory joins and querying/logic to get at the actual data I need.
Sorry if I'm being generic. I'm more just looking for thoughts and opinions as opposed to a solution to my problem. Please don't hesitate to share your thoughts. Thanks.
For your first question: yes, Entity Framework can use an existing DB as its source; the term to search for when looking for Entity Framework tutorials on this topic is "Database First".
For your second question, let me first preface it with a warning: many ORMs are not designed around loading entire tables and doing bulk operations on them, especially if you will be modifying the result set and pushing the data back to the server in large quantities. The updates will be row-based, not set-based, because you did the modifications in C# code, not in a T-SQL query. Most ORMs are built around the expectation that you will be doing CRUD operations at the row level, not ETL operations or set-level CRUD operations (except for Read, which most ORMs will do as a set operation).
If you will not be updating the data, only pulling it out using Entity Framework and building reports and whatnot off of it, you should be fine. If you are bulk inserting into the database, things get more problematic. See this SO question for more information.
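For the read-only scenario described here, a minimal sketch of pulling whole (small) tables into memory with EF and joining them with LINQ to Objects; the context, table and property names are placeholders:
using System.Data.Entity;
using System.Linq;

public class ReportBuilder
{
    public void Build()
    {
        using (var context = new ProductionCopyEntities())   // hypothetical Database First context
        {
            // AsNoTracking skips change tracking, which is all you need
            // for a read-only snapshot of small tables.
            var orders    = context.Orders.AsNoTracking().ToList();
            var customers = context.Customers.AsNoTracking().ToList();

            // Once materialized, every further join/filter runs in memory
            // (LINQ to Objects) and puts no extra load on the production database.
            var report = from o in orders
                         join c in customers on o.CustomerID equals c.CustomerID
                         select new { c.Name, o.OrderDate, o.Total };
        }
    }
}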

CQS queries - Auto generate ADO Mapping to View models?

I am currently working on an MVC 4 application. I am planning to implement a command query separation pattern to improve the performance and structure of the application. I am happy with my commands, which map my view models to my entities and then use NHibernate to save my data.
The commands and queries will be running off the same database.
I am a bit unsure of the best approach to managing my queries. In my last project I used stored procedures for all of my reads/queries, then used AutoMapper to map my IDataReaders to my ViewModels. This worked OK, but the main problems were the turnaround time of writing the stored procedures, and the fact that when the domain model changed the stored procedures got out of sync.
Therefore, ideally I would like something that auto-generated the views or sprocs from my view models. Realistically, though, I cannot see a way of doing this, as the sprocs/views need some knowledge of potentially more than one table, so simply reflecting on the view model properties would not be enough.
I could auto-generate a table for each view model, read from this during development, then once the domain was stable and before we went to test, convert these to views/sprocs?
So I guess what I am asking is:
Has anyone managed to solve the sproc/view auto-generation problem I described above? (This would be my favourite outcome!) Or, even better, has anyone designed a much more graceful solution?
Or is it more sensible to only implement raw ADO reads where they are absolutely necessary (i.e. searches) and thereby dispense with the need for lots of sprocs/views, but still separate my queries out into their own channel (where some of them use NHibernate, whilst others use my ADO reader)?
(p.s I have looked at the other stackoverflow CQS related questions and I hope mine is different enough to warrant this question)
What do stored procedures solve for you? Why can't you use NHibernate for reads too? Are the queries NHibernate produces that bad?
If performance of reads is crucial for you, and the shape of your view models is very different from how you store your model, making the denormalization to a view model too heavy to do on the fly, you might have to consider completely splitting reads and writes.
When you write something, you can raise an event (often done asynchronously) so that subscribers listening on the read side can store the data in a form that is optimal for reads (close to the shape of your view model). This would make querying really fast.
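A very condensed sketch of that idea follows; all names are invented, and a real implementation would normally involve a message bus and asynchronous handling:
using System.Data.SqlClient;

// Event raised by the write side after a successful command.
public class ProductRenamed
{
    public int ProductId { get; set; }
    public string NewName { get; set; }
}

// Read-side subscriber: keeps a denormalized table shaped like the view model.
public class ProductSummaryProjection
{
    private readonly string connectionString;

    public ProductSummaryProjection(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public void Handle(ProductRenamed e)
    {
        using (var cn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE ProductSummary SET Name = @Name WHERE ProductId = @Id", cn))
        {
            cmd.Parameters.AddWithValue("@Name", e.NewName);
            cmd.Parameters.AddWithValue("@Id", e.ProductId);
            cn.Open();
            // Queries against ProductSummary are now single-table SELECTs with no joins.
            cmd.ExecuteNonQuery();
        }
    }
}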
Since a picture says more than a thousand words, you can read a good introduction to CQRS here.

Database handling in applications

This is a bit of difficult question to ask but any feedback at all is welcome.
I'll start with the background: I am a university student studying software engineering. Last year we covered C#, and I got myself a job working in a software house coding prototype software in C# (their main language is C++ using Qt). After producing the prototype it was given to some clients, who have all passed back positive feedback.
Now I am looking at the app and thinking I could use it as a showcase with my CV, especially as the clients who used the software have said that they will sign something to reference it.
So if I am going to do that, I had better get it right and do it the best I possibly can. I have started to look at where I can improve it, and one of the areas I think I can improve is the way it handles the database connections and the data along with them.
The app itself runs alongside a MySQL server, and there are six different schemas which it gets its data from.
I have written a class (I called it databaseHandler) which holds the MySqlConnection (one question was about whether the connection should remain open the whole time the app is running, or be opened, used to fire a query, then closed, etc.). Inside this class I have written a method which takes some arguments, creates its query string, and then does the whole mysqlDataReader = cmd.ExecuteReader() routine; it then returns the reader back to wherever it was called from.
After speaking to a friend, he mentioned it might be nice if the method returned the raw data and not the reader, thereby keeping all the database "stuff" away from the main app.
After playing around, I managed to find a couple of tutorials on putting the reader data into arrays and ArrayLists and passing those back, and also had a go at passing back an ArrayList of Hashtables. These methods obviously mean that the developer must know the column names in order to find the correct data.
Then I stumbled across a page which went on about creating a class which has the attributes of the column names and creating a list from which you could pull your data:
http://zensoftware.org/archives/248 is the link
So this made me think: in order to use this method, would I need to create six classes with the attributes of the columns of my tables (a couple of tables have up to 10-15 columns)? Or is there a better way for me to handle my data?
I am not really clued up on these things, but if pointed in the right direction I am a very fast learner :)
Again, I thank you for any input whatsoever.
Vade
You have a lot of ideas that are very close; these are pretty common problems, and it's good that you are actively thinking about how to handle them!
On the question about leaving the connection open for the whole program versus only having it open during the actual query time: the common (and proper) way to do this is to only have the connection open for as long as you need it, so:
// using MySql.Data.MySqlClient;
MySqlConnection cn = new MySqlConnection(yourConnectionString);
cn.Open();
// Execute your queries here
cn.Close();
This is better since you don't risk leaving connections open or having transaction issues tying up databases and resources.
Having just the data returned rather than the actual data reader is a good idea, but by returning the data as an ArrayList or whatever you are losing some of the structure of the data.
A good way to do this would be either to have your class take the data reader and populate its own data, OR to have the data layer return an instance of your class after reading the data.
I believe that it would be an excellent approach if your data access class returned a custom class populated with data from the database. That would be object-oriented. Instead of, say, returning a DataSet or an array containing customer information, you would create a Customer class with properties. Then, when you retrieve the data from the database, you populate an instance of the Customer class with the data, and return it to the calling code.
A lot of the newer Microsoft technologies are focused on making this task easier. Quite often there are many more than six classes needed, and writing all that code can seem like drudgery. I would suggest that, if you are interested in learning about those newer approaches, and possibly adapting them to your own code, you check out LINQ to SQL and Entity Framework.
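A bare-bones sketch of the hand-rolled version of that approach (the Customer shape, table and column names are just examples):
using System;
using System.Collections.Generic;
using MySql.Data.MySqlClient;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

public class CustomerRepository
{
    private readonly string connectionString;

    public CustomerRepository(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // The data layer opens the connection, reads, and returns populated objects,
    // so nothing MySQL-specific leaks out to the calling code.
    public List<Customer> GetCustomers()
    {
        var customers = new List<Customer>();
        using (var cn = new MySqlConnection(connectionString))
        using (var cmd = new MySqlCommand("SELECT id, name, email FROM customers", cn))
        {
            cn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    customers.Add(new Customer
                    {
                        Id = Convert.ToInt32(reader["id"]),
                        Name = reader["name"].ToString(),
                        Email = reader["email"].ToString()
                    });
                }
            }
        }
        return customers;
    }
}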
one question was about if the connection should remain open the whole time the app is running, or open it fire a query then close it etc
You want to keep the connection open as little as possible. Therefore you should open it on each data request and close it as soon as you are done. You should also dispose of it, but if your database code is inside a C# using statement that happens automatically.
As far as the larger question on how to return the data to your application you are on the right track. You typically want to hide the raw database from the rest of your application and mapping the raw data to other intermediate classes is the correct thing to do.
Now, how you do this mapping is a very large topic. Ideally you don't want to create classes that map one-to-one to your tables/columns, but rather provide your app with a more app-friendly representation of the data (e.g. business objects rather than database tables). For example, if your employee data is split into two or three tables for normalization purposes, you can hide this complexity and present the information as a single Employee class that binds the data from the other tables together.
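To make that concrete, a small sketch of an app-facing class assembled from two hypothetical tables (employee and employee_contact); all names are invented:
using System;
using MySql.Data.MySqlClient;

// App-friendly business object; callers never see the underlying table split.
public class Employee
{
    public int Id { get; set; }
    public string FullName { get; set; }
    public string Phone { get; set; }   // stored in a separate contact table in the database
}

public static class EmployeeData
{
    public static Employee GetEmployee(string connectionString, int id)
    {
        const string sql =
            @"SELECT e.id, e.full_name, c.phone
              FROM employee e
              JOIN employee_contact c ON c.employee_id = e.id
              WHERE e.id = @id";

        using (var cn = new MySqlConnection(connectionString))
        using (var cmd = new MySqlCommand(sql, cn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            cn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read()) return null;
                return new Employee
                {
                    Id = Convert.ToInt32(reader["id"]),
                    FullName = reader["full_name"].ToString(),
                    Phone = reader["phone"].ToString()
                };
            }
        }
    }
}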
Abstracting away your data access code using objects is known as Object/Relational mapping. It's actually a much more complex task than it appears at first sight. There are several libraries, even in the framework itself, that already do very well what you're trying to implement.
If your needs are very simple, look into typed DataSets. They let you create the table classes in a designer and also generate objects that will do the loading and saving for you (given certain limitations)
If your needs are less simple, but still pretty simple, I recommend you take a look at Linq To SQL to see if it covers your needs, as it does table-class mapping in a very straightforward way and uses a more modern usage pattern than DataSets.
There are also more complex ORMs that allow you to define more complex mappings, like Entity Framework or nHibernate, but very often their complexity is not necessary.
Details like connection lifetime will then depend on your specific needs. Sometimes it's best to keep the connection open, if you have a lot of queries caused by user interaction, like is usually the case with a desktop app. Other times it's best to keep them as short as possible to avoid congestion, like the case of web apps.
Whichever technology you choose will likely end up guiding you onto a good set of practices for it, and the best you can do is try things out and see which works best for you.

pluggable data store architectures

I have a pluggable system management tool. The architecture of this kind of thing is well understood (interfaces, publish/subscribe, ...). How about the data store, though? What do people do?
I need plugins to be able to add new entities, extend existing entities, establish new relationships, etc.
My thoughts (SQL), not necessarily well thought out
each plugin simply extends the schema when it is installed. In the old days changing the schema was a big no-no; now databases are very relaxed about this
plugins have their own tables. If two of them have an entity, say Person, then there are two tables, p1_person and p2_person
plugins have their own database
invent some sort of flexible scheme where the tables are softly typed. Maybe many attributes packed into a single attribute. The ultimate is to have one big table called data, with a key of table name and column name, and a single data value.
Not SQL
object DB. I have no experience with these. Anybody care to pass on experience? db4o, for example. Can I change the 'schema' of objects as the app evolves?
NO-SQL
this is 'where it's at' at the moment. Most of these seem to be aimed slightly differently from my needs. Anybody want to pass on experience with these?
Apologies for the open ended question
My suggestion is to go read about Entity Framework:
a lot of the situations you are describing can be solved (very elegantly) using table inheritance.
Your idea of one big table called data makes the hamsters in my computer cry ;)
The general trend is away from weakly typed schemas because they cannot be debugged at compile time. What you get from something like Entity Framework is a strongly typed, extensible schema that you can code against using LINQ.
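As a rough sketch of what table inheritance looks like through EF (code-first notation purely for brevity; the plugin entity names are invented):
using System.ComponentModel.DataAnnotations.Schema;
using System.Data.Entity;

// Core entity owned by the host application.
public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// A plugin extends Person with its own fields; with table-per-type mapping EF
// stores them in a separate table joined 1:0..1 back to the base Person table.
[Table("Plugin1_People")]
public class Plugin1Person : Person
{
    public string Department { get; set; }
}

public class SystemContext : DbContext
{
    public DbSet<Person> People { get; set; }
}

// Strongly typed querying across the hierarchy, e.g.:
// var departments = context.People.OfType<Plugin1Person>().Select(p => p.Department);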
Object databases:
like you, I haven't played with them massively; however, the time when I was considering them was a time when there was no good ORM for .NET and writing ADO.NET code was slowly killing me.
As for NoSQL, these are databases that meet a performance need. SQL performs badly in situations where there are lots of small writes occurring. I say 'badly' tongue in cheek: it performs very well, but when you scale to millions of concurrent users everything changes. My understanding of NoSQL is that it is a non-relational format designed for lots of small, fast writes and reads. The scale of the sites that use these is usually very large.
OK, in response:
I am currently lucky enough to be on a greenfield project, so I am using EF to generate my schema.
On non-greenfield projects I use SQL scripts to update my table structures. As for implementing table inheritance in SQL, it is very easy once you know the concept: it is essentially a one-to-many relationship with a constraint that the "many" side will only ever be 0-1.
I wouldn't write .NET code that updates the database structure... that sounds like a disaster waiting to happen to me.
I'm beginning to think I have misunderstood what you are looking for. I find databases to be second nature, as I have spent so long with them.
I haven't found a replacement for being meticulous about script management.
