I am having an issue with an internal site during load testing (50+ users).
The pages work fine with one or two users, but when many people hit the site at once, I get errors from a lot of my data bindings: "System.Web.HttpException: DataBinding: 'System.Data.DataRowView' does not contain a property with the name ".
All of these property names exist in the results I return from the database, but for some reason the error only appears when many users hit the site at the same time.
I am using ASP.NET 4.0 and WCF.
The pages use data Repeaters to bind data. I also checked the database, and the responses from the database server are good, no issues there, so it's purely an application problem.
Any help is much appreciated.
It seems there is a performance issue.
You can:
1. Use a simpler data source.
2. Use output caching or partial caching.
3. Use data caching in your business logic layer (e.g. the ASP.NET internal cache or the Caching Application Block helper, ...); see the sketch after this list.
4. Review the SQL generated by your ORM (Entity Framework) and optimize it.
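For item 3, a minimal sketch of data caching with the built-in ASP.NET cache (the "Products" key, the 5-minute timeout, and LoadProductsFromDatabase are placeholders for your own values and data access call):

using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public static class ProductCache
{
    // Returns the cached table if present; otherwise loads it once and
    // caches it for 5 minutes, so concurrent users share a single copy.
    public static DataTable GetProducts()
    {
        var table = HttpRuntime.Cache["Products"] as DataTable;
        if (table == null)
        {
            table = LoadProductsFromDatabase(); // your existing data access call
            HttpRuntime.Cache.Insert(
                "Products", table, null,
                DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
        }
        return table;
    }

    private static DataTable LoadProductsFromDatabase()
    {
        // placeholder for the real database query
        throw new NotImplementedException();
    }
}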
Use a really basic data access path, like SqlDataReader. It seems to me that you have some resource issues. And ensure you close all your connections.
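For example, a rough sketch in the page's code-behind that binds a Repeater straight from a SqlDataReader, with using blocks so the connection is always closed (the "MyDb" connection string name, the query, and rptItems are assumptions):

using System;
using System.Configuration;
using System.Data.SqlClient;

protected void Page_Load(object sender, EventArgs e)
{
    string connStr = ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;
    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand("SELECT Id, Name FROM dbo.Items", conn))
    {
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            rptItems.DataSource = reader; // rptItems: the Repeater on the page
            rptItems.DataBind();
        }
    } // the connection is closed here even if DataBind throws
}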
I'm a beginner with ASP.NET and web applications in general.
For a project I have to interact with an engineering software package to read some data, and for this I have to use an ASP.NET project based on .NET Framework 4.8.
For now I call these functions with buttons and display the data in GridViews. The problem is that I want to show the data on all clients, and the data should still be there when I refresh the page on one client.
To load some data into the GridView, I tested it by using a function like this:
[Load data to datagrid]
The problem is I can't see these changes on other clients.
Is there a way to implement this?
Well, running some in-memory code for one user of course will not work for other users. You would probably be best off writing a separate console application, placing it on the server, and then, say, scheduling it to run every 5 minutes or whatever. That console or desktop program would then write out the data to a database table. Now any and all web pages (and users) can have a grid display that queries against that database.
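A bare-bones sketch of that console program (the dbo.EngineeringData table, the "MyDb" connection string, and the sample values are all assumptions; the read from your engineering software goes where the placeholders are):

using System;
using System.Configuration;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        string connStr = ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "INSERT INTO dbo.EngineeringData (Name, Value, ReadAtUtc) VALUES (@n, @v, @t)",
            conn))
        {
            // Placeholder values: read these from the engineering software instead.
            cmd.Parameters.AddWithValue("@n", "Sensor1");
            cmd.Parameters.AddWithValue("@v", 42.0);
            cmd.Parameters.AddWithValue("@t", DateTime.UtcNow);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}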
The other possibility, if for some strange reason you want to avoid using a database to persist and store this information:
You could consider using SignalR. It is complex, but it allows you to push (send) information out to all web clients connected to your system. This involves some rather fancy footwork, and it does require your web page to include some JavaScript. As a result, the simple database idea is less work, simpler, and does not require as many hand-stands.
But SignalR is what is used for pushing information out to each client browser. You can get started here, since this is a vast topic well beyond the scope of a simple Q&A on SO:
https://learn.microsoft.com/en-us/aspnet/signalr/overview/getting-started/
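Just to give a flavour, the server side of a SignalR 2.x broadcast might look something like this (the hub and method names are mine, this assumes SignalR is already wired up in Startup, and the browser also needs the SignalR JavaScript client, not shown):

using Microsoft.AspNet.SignalR;

// Browsers connect to this hub via the SignalR JavaScript client.
public class DataHub : Hub
{
}

public static class DataNotifier
{
    // Call this on the server after new data has been written.
    public static void NotifyClients()
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<DataHub>();
        hub.Clients.All.refreshGrid(); // each page's JavaScript handles "refreshGrid"
    }
}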
I have done a lot of searching and experimenting and have been unable to find a workable resolution to this problem.
Environment/Tools
Visual Studio 2013
C#
Three tier web application:
Database tier: SQL Server 2012
Middle tier: Entity Framework 6.* using Database First, Web API 2.*
Presentation tier: MVC 5 w/Razor, Bootstrap, jQuery, etc.
Background
I am building a web application for a client that requires a strict three-tier architecture. Specifically, the presentation layer must perform all data access through a web service; the presentation layer cannot access a database directly. The application allows a small group of paid staff members to manage people, waiting lists, and the resources they are waiting for. Based on the requirements, the data model/database design is entirely centered around the people (User table).
Problem
When the presentation layer requests something, say a Resource, it is related to at least one User, which in turn is related to some other table, say Roles, which are related to many more Users, which are related to many more Roles and other things. The point being that when I query for just about anything, EF wants to bring in almost the entire database.
Normally this would be okay because of EF's default lazy-load behavior, but when serializing just about any object to JSON for returning to the presentation layer, the Newtonsoft.Json serializer hangs for a long time and then throws a stack overflow error.
What I Have Tried
Here is what I have attempted so far:
Set Newtonsoft's JSON serializer ReferenceLoopHandling setting to Ignore. No luck. This is not a cyclic-graph issue; it is just the sheer volume of data that gets brought in (there are over 20,000 Users).
Clear/reset unneeded collections and set reference properties to null. This showed some promise, but I could not get around Entity Framework's desire to track everything.
Just setting nav properties to null/clear causes those changes to be saved back to the database on the next .SaveChanges() (NOTE: This is an assumption here, but seemed pretty sound. If anyone knows different, please speak up).
Detaching the entities causes EF to automatically clear ALL collections and set ALL reference properties to null, whether I wanted it to or not.
Using .AsNoTracking() on everything threw some exception about not allowing non-tracked entities to have navigation properties (I don't recall the exact details).
Use AutoMapper to make copies of the object graph, only including related objects I specify. This approach is basically working, but in the process of (I believe) performing the auto-mapping, all of the navigation properties are accessed, causing EF to query and resolve them. In one case this leads to almost 300,000 database calls during a single request to the web service.
What I am Looking For
In short, has anyone had to tackle this problem before and come up with a working and performant solution?
Lacking that, any pointers for at least where to look for how to handle this would be greatly appreciated.
Additional Note: It occurred to me as I wrote this that I could possibly combine the second and third items above. In other words, set/clear nav properties, then automap the graph to new objects, then detach everything so it won't get saved (or perhaps wrap it in a transaction and roll it back at the end). However, if there is a more elegant solution I would rather use that.
Thanks,
Dave
It is true that doing what you are asking for is very difficult, and it's an architectural trap I see a lot of projects get stuck in.
Even if this problem were solvable, you'd basically end up with a data layer that just wraps the database and destroys performance, because you can't leverage SQL properly.
Instead, consider building your data access service in such a way that it returns meaningful objects containing meaningful data; that is, only the data required to perform a specific task outlined in the requirements documentation. It is true that a post is related to an account, which has many achievements, etc., etc. But usually all I want is the text and the name of the poster. And I don't want it for one post; I want it for each post on a page. So write data services and methods which do things that are relevant to your application.
To clarify, it's the difference between returning a Page object containing a list of Posts which contain only a poster name and a message, and returning entire EF objects containing large amounts of irrelevant data such as IDs and auditing data like creation time.
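A rough sketch of that kind of projection with EF and LINQ (PostSummary, MyContext, and the entity/property names are illustrative, not from the question):

using System.Collections.Generic;
using System.Linq;

public class PostSummary
{
    public string PosterName { get; set; }
    public string Message { get; set; }
}

public static class PostService
{
    // Projecting straight to the small DTO makes EF emit one SQL query that
    // selects only these columns; no related entities are materialized.
    public static List<PostSummary> GetPage(MyContext db, int page, int pageSize)
    {
        return db.Posts
            .OrderByDescending(p => p.CreatedAt)
            .Skip((page - 1) * pageSize)
            .Take(pageSize)
            .Select(p => new PostSummary
            {
                PosterName = p.Account.Name, // becomes a SQL join, not a lazy load
                Message = p.Text
            })
            .ToList();
    }
}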
Consider the Twitter API. If it returned entire entity graphs like that, performance would be abysmal with the amount of traffic Twitter gets. And most of the information returned (costing CPU time, disk activity, DB connections held open longer, and network bandwidth) would be completely irrelevant to what developers want to do.
Instead, the API exposes what would be useful to a developer looking to make a Twitter app. Get me the posts by this user. Get me the bio for this user. This is probably implemented as very nicely tuned SQL queries for someone as big as Twitter, but for a smaller client, EF is great as long as you don't attempt to defeat its performance features.
This additionally makes testing much easier as the smaller, more relevant data objects are far easier to mock.
For three-tier applications, especially if you are going to expose your entities "raw" in services, I would recommend that you disable lazy loading and proxy generation in EF. Your alternative would be to use DTOs instead of entities, so that the web services return a model object tailored to the service instead of the entity (as suggested by jameswilddev).
Either way will work, and has a variety of trade-offs.
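For the first option, disabling both in EF6 is two lines on the context (MyEntities stands in for your generated Database First context):

using (var db = new MyEntities())
{
    db.Configuration.ProxyCreationEnabled = false;
    db.Configuration.LazyLoadingEnabled = false;
    // Queries now return plain entities that serialize cleanly;
    // related data must be loaded explicitly via Include().
}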
If you are using EF in a multi-tier environment, I would highly recommend Julia Lerman's DbContext book (I have no affiliation): http://www.amazon.com/Programming-Entity-Framework-Julia-Lerman-ebook/dp/B007ECU7IC
There is a chapter in the book dedicated to working with DbContext in multi-tier environments (you will see the same recommendations about Lazy Load and Proxy). It also talks about how to manage inserts and updates in a multi-tier environment.
I had such a project, and it was a stressful one... I also needed to load a large amount of data, process it from different angles, and pass it to a complex dashboard for charts and tables.
My optimizations were:
1. Instead of using EF to load data, I called old-school stored procedures (and, for more optimization, grouped the data to reduce the result table as much as possible for the charts; e.g. one query returns a table from which the datasets for multiple charts can be extracted). See the sketch at the end of this answer.
2. More importantly, instead of Newtonsoft's JSON I used fastJSON, whose performance is worth mentioning (it is really fast, but not compatible with complex objects; a simple example would be view models that contain lists of other models, and so on). It is better to read the pros and cons of fastJSON first:
https://www.codeproject.com/Articles/159450/fastJSON
3. The relational database design is the prime suspect in this problem. It might be good to put the tables that hold raw data for processing (most probably for analytics) into a denormalized schema, which saves time when querying the data.
Also beware of using the model classes generated by the EF designer for reading/selecting data, especially when you want to serialize them. (Sometimes I think about splitting the same schema into two sets of identical classes/models, one for writing and one for reading, so that the write models keep the virtual collections that come from foreign keys and the read models ignore them... I am not sure about this.)
NOTE: In the case of very, very large data it is better to go deeper and set up in-memory OLTP for the tables that contain facts or raw data; in that case the table acts like a non-relational table, much like NoSQL.
NOTE: In MSSQL, for example, you can use the benefits of SQLCLR, which lets you write routines in C#, VB, etc. and call them from T-SQL; in other words, handle the data processing at the database level.
4. For interactive views that need to load data, I think it is better to consider which information should be processed server side and which can be handled client side (sometimes it is better to query data from the client side; however, you should consider that data on the client side can be accessed by the user). It is situation-dependent.
5. For a large raw-data table in a view, datatables.min.js is a good idea, and everyone suggests using server-side paging on tables.
6. For importing and exporting data from big files, OLE DB is the best choice, I think.
However, I still doubt these are exact solutions. If anybody has practical solutions, please mention them ;)
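As a sketch of item 1 (the procedure name, parameter, and connection string are invented), calling a stored procedure directly instead of going through EF:

using System;
using System.Data;
using System.Data.SqlClient;

public static class DashboardData
{
    public static DataTable Load(string connStr)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("dbo.GetDashboardData", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@FromDate", DateTime.UtcNow.AddDays(-30));
            var table = new DataTable();
            new SqlDataAdapter(cmd).Fill(table); // one round trip, one result table
            return table; // slice it into the datasets for each chart in memory
        }
    }
}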
I have fiddled with a similar problem using EF Model First, and found the following solution satisfying for one-to-many relations:
Include the foreign key properties in the sub-entities and use them for later look-ups.
Set the get/set modifiers of any navigation properties (sub-collections) in your EF entity to private.
This will give you an object that does not expose the sub-collections, so only the main properties get serialized. This workaround requires some restructuring of your LINQ queries: ask directly from your table of sub-items, with the foreign key property as your filtering option, like this:
var myFitnessClubs = context.FitnessClubs
    .Where(f => f.FitnessClubChainID == myFitnessClubChain.ID);
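For illustration, the resulting classes have roughly this shape (FitnessClubChain hides its collection, FitnessClub carries the foreign key; the names follow the query above):

using System.Collections.Generic;

public class FitnessClubChain
{
    public int ID { get; set; }
    public string Name { get; set; }

    // Private navigation property: not exposed, so not serialized.
    private ICollection<FitnessClub> FitnessClubs { get; set; }
}

public class FitnessClub
{
    public int ID { get; set; }
    public string Name { get; set; }

    // Foreign key property used for explicit look-ups instead.
    public int FitnessClubChainID { get; set; }
}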
Note 1:
You may of course choose to implement this solution only partially, affecting just the sub-collections that you strongly do not want to serialize.
Note 2:
For many-to-many relations, at least one of the entities needs a public representation of the collection, since the relation cannot be retrieved using a single ID property.
Good morning,
I am working on a new MVC4 app which consists of a huge search form (more than 50 parameters).
After the form is submitted, a complex query is built and I obtain an IList of a view model:
IList<ResultViewModel> results = session
    .CreateSQLQuery(ComplexQuery)
    .SetResultTransformer(Transformers.AliasToBean<ResultViewModel>())
    .List<ResultViewModel>();
InterfaceVM.QueryResults = results;
InterfaceVM is then the model used by my output views to display grid results and Leaflet maps.
Everything works fine, except that the result set can be up to 1 million records. I need to implement paging, using PagedList.Mvc for example, but without having to pass the search parameters in the URL; I would like to avoid rebuilding the query and the IList object again and again.
InterfaceVM.QueryResults = results.ToPagedList(pageNumber, 20);
Moreover, this IList of results will also be used several times in my output views to dynamically generate complex GIS outputs.
I spent several hours reading around the web to find the best strategy for keeping my result object alive after the form has been submitted, so that I can manipulate it easily and avoid rebuilding it for each paging/mapping request.
I read about temp tables in SQL Server, the Session object, MemoryCache, etc., but I don't know what would best fit my situation.
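For example, with MemoryCache I imagine something like this inside the controller (SearchModel, BuildCacheKey, and RunComplexQuery are hypothetical stand-ins for my form model, a key derived from the 50 parameters, and the NHibernate query shown above):

using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public ActionResult Results(SearchModel searchModel, int pageNumber = 1)
{
    // Cache the full result list once per distinct search, then serve
    // every paging/mapping request from the cache instead of re-querying.
    string key = BuildCacheKey(searchModel);
    var results = MemoryCache.Default.Get(key) as IList<ResultViewModel>;
    if (results == null)
    {
        results = RunComplexQuery(searchModel);
        MemoryCache.Default.Set(key, results,
            new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(20) });
    }
    InterfaceVM.QueryResults = results.ToPagedList(pageNumber, 20);
    return View(InterfaceVM);
}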
Your input on this would be really helpful. I am using Fluent NHibernate, MVC4, and SQL Server 2008 R2.
Thanks in advance for your help
Sylvain
I am working on my first ASP.NET MVC application. I am using the Razor view engine and MVC version 3.
I am getting the data for my model from the database in some raw format. Then I do some processing on it, like concatenating strings and formatting date columns using some LINQ queries, all on data that I already have in the model class.
As I understand it, all of this code execution happens on the server. I want to move this burden from my server to the client machines: I want to pass the raw data to my view and then write code in the view to do the looping, formatting, and so on.
I just want to confirm whether this is a good approach to take and whether it would really take some burden off my server.
Thanks
C# or VB that is written in your view is not client side; it's still server side. It's used to manipulate the rendering of the HTML before it is sent to the client.
You'd have to pass all your raw data then process it with JavaScript.
Performance of your app would then depend on each client machine and therefore would not be consistent between two users. Maintenance would be tricky as a result.
To cut a long story short, I would not recommend it. Your server is most likely designed to handle load; it's the right place for this sort of thing.
And I'd also read up on how server-side and client-side code execution relate. Simply doing MVC is a good start; it naturally teaches you how the web works, more so than Web Forms does.
This may be premature optimization. I doubt that a server would be bogged down performing simple things like string concatenation, formatting, etc.
I suggest you only focus on this if you run into performance issues; otherwise it isn't worth the effort.
I think you have confused a View in MVC with the HTML in the browser. You can have any C# code in a View, but not in the browser. If your question is about doing business logic in the View, then read further.
MVC allows you to plug any kind of view (mobile, desktop, web app) into a controller, so a view should not do any business logic, or you will end up duplicating the business logic.
This LINK will help your understanding of MVC.
Data formatting in the view is not an issue at all, but implementing business logic in the view is (I guess that's what you meant by the use of LINQ in a View).
For instance, inside a View, using LINQ on model entities to loop through a collection and create an HTML table is perfectly all right; in fact, the ViewModel should be tightly coupled to the View.
[Trying hard not to be philosophical here, but :)] In the end, whatever architecture you use, DO NOT limit yourself from thinking outside the box. Almost all projects use multiple patterns, and most (good) developers solve business problems by unknowingly implementing a pattern. :)
I had a bit of a shock recently when thinking about combining a service-oriented architecture with a brilliant UI that leverages SQL to optimize performance when querying data.
The DevExpress grid view for ASP.NET, for example, is so cool that it delegates all filtering, sorting, and paging logic to the database server. But this presumes that the data is retrieved from an SQL-capable database server.
What if I want to introduce a web service layer between the database and UI layers, and have the UI use the web services to query the data?
How can I design the web services and the UI such that I can pass filtering requests from the UI via the web services to the database?
Do I need to provide a List QueryData(string sqlQuery)-style web service and parse the SQL string on my own to guarantee security/access restrictions?
Or is there any good framework or design guideline that takes this burden from me?
This must be a very common problem, and I am sure that it has been solved relatively adequately already, hasn't it?
I am mainly interested in a .NET/C#-based or -compatible solution.
Edit: I've found OData and Microsoft WCF Data Services. If I've got it right, an OData-based application could look as follows:
User ---/Give me Page 1 (records 1..10)/---> ASP.NET Server Control (of course, via HTTP)
ASP.NET Server Control ---/LINQ Query/---> Data service client
Data service client ---/OData Query/---> WCF Data Service
WCF Data Service ---/LINQ Query/---> Entity Framework
Entity Framework ---/SQL Query/---> Database
If I've got this right, my DevExpress server control should be able to delegate a filtering request (e.g. "give me the top 10 only") through all these layers down to the database, which then applies its indexes etc. in order to perform that query.
Is that right?
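If I understand it correctly, the service itself would be tiny; a minimal WCF Data Services sketch over an EF model (MyEntities stands in for the generated context) would be:

using System.Data.Services;
using System.Data.Services.Common;

public class MyDataService : DataService<MyEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Read-only access to all entity sets; paging and filtering arrive as
        // OData query options ($top, $skip, $filter) and flow down to SQL.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}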
Edit: It is a joy to see this thread coming to life :-) It is hard to decide which answer to accept, because all of them seem equally good to me...
Really interesting question! I don't think there's a right or wrong answer, but I think you can establish some architectural principles.
Firstly, "Service Oriented Architecture" is an architectural style that requires you to expose business services for consumption by other applications. Running a database query is not a service - in my opinion at least. In fact, providing a web service to execute arbitrary SQL is probably an anti-pattern - you would bypass the security model most database servers provide, you'd have no control over the queries - it's relatively easy to write a syntactically correct "select" query which cripples your database (Cartesian joins are my favourite), and the overhead of the web service protocol would make this approach several times slower than just querying the database through normal access routes - LINQ or whatever.
So, let's assume you accept that point of view - what is the solution to the problem?
Firstly, if you want the productivity of using the DevExpress grid, you probably should work in the way DevExpress want you to work - if that means querying the database directly, that's by far the best way to go. If you want to move to a SOA, and the DevExpress grid doesn't support that, it's time to find a new grid control, rather than tailor your entire enterprise architecture to a relatively minor component.
Secondly - structurally, where should you do your sorting, filtering etc? This is an easy concept in SQL, but rather unpleasant when trying to translate it to a web service specification - you quickly end up with an incomprehensible method signature ("getAccountDataForUser(userID, bool sortByDate, bool sortByValue, bool filterZeros, bool filterTransfers)").
On the other hand, performing the filtering and sorting on the client is messy and slow.
My recommendation would be to look at the Specification Pattern - this allows you to keep clean method signatures while specifying the desired filtering and sorting in a consistent way.
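A bare-bones version of that idea (the Account type and the filter rule are illustrative): the client composes a specification object, and the service turns it into a LINQ query that EF or your ORM can translate to SQL:

using System;
using System.Linq;
using System.Linq.Expressions;

// A specification carries the filter as data, not as SQL text.
public interface ISpecification<T>
{
    Expression<Func<T, bool>> Criteria { get; }
}

public class Account
{
    public int UserId { get; set; }
    public decimal Value { get; set; }
}

public class AccountsForUserSpec : ISpecification<Account>
{
    private readonly int userId;
    public AccountsForUserSpec(int userId) { this.userId = userId; }

    public Expression<Func<Account, bool>> Criteria
    {
        // Non-zero accounts for one user; the rule stays readable and testable.
        get { return a => a.UserId == userId && a.Value != 0; }
    }
}

public static class AccountService
{
    // One clean method signature instead of an explosion of boolean flags.
    public static IQueryable<Account> Find(IQueryable<Account> accounts,
                                           ISpecification<Account> spec)
    {
        return accounts.Where(spec.Criteria);
    }
}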
Implementing the List QueryData(string sqlQuery) service will open you up to a near-infinite number of security problems.
If you need to filter based on security access, then the OData implementation will not be trivial either: you need to set up proper authorization/authentication on the WCF service so that you can further filter the OData query based on the authenticated user's data.
The easiest way to implement server side data operations when the data is retrieved from a WCF service would be to intercept the Grid's sort/filter operations in the code behind, and then call a specialized method on the WCF service based on what the user is doing.
"This must be a very common problem, and I am sure that it has been
solved relatively adequately already, has it?"
Given the number of skinned cats lying around the developer world, I'd have to say no.
WCF Data Services offers the best solution I've found so far, but authentication and authorization can be tricky there. There is a decent post covering the server-side issues around this at http://blogs.msdn.com/b/astoriateam/archive/2010/07/19/odata-and-authentication-part-4-server-side-hooks.aspx. Setting this up isn't easy, but it does work well.