Using a bulk amount of data throughout the MVC session - C#

I have to use a bulk amount of data throughout the session for each user in an MVC application (the data is user specific).
Below are the two methods I found:
Use a Session object: load the data into it and retrieve it whenever needed
Create an XML file on disk and retrieve the data whenever required
Which one of the above is good for a web application?
If neither is good, please let me know a convenient method.
I read about caching, but does that raise the same memory concern as Session or not?
Please help me with a simple example if anyone has gone through the same.

I would not go with either of the approaches you highlighted; both will give you problems, and code modification will be required when you plan to scale your application and move to a load balancer or web-farm type of environment.
You should try to make your application as stateless as possible.
You can store the data in a database; if your tables are properly indexed it will not take much time to fetch the data (almost the same as fetching it from an XML file). See the sketch below.
If the data is not sensitive and you require it on the client side only, you can think of storing it in sessionStorage.
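Here is a minimal sketch of that database approach. The repository class, the UserData table and its columns are all hypothetical; the point is that each request fetches only the current user's rows from an indexed table instead of parking the whole data set in Session.

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

public class UserDataRepository
{
    private readonly string _connectionString;

    public UserDataRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public List<string> GetValuesForUser(int userId)
    {
        var values = new List<string>();
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT Value FROM UserData WHERE UserId = @UserId", connection))
        {
            // An index on UserData(UserId) keeps this lookup fast.
            command.Parameters.AddWithValue("@UserId", userId);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    values.Add(reader.GetString(0));
            }
        }
        return values;
    }
}
```

Because nothing is held per user between requests, the application stays stateless and can move to a web farm without code changes.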

Related

Would it be faster to parse XML from a URL or to save the information in a database?

I am relatively new to developing a website with ASP.NET (C#) and SQL Server 2008 (until now I was using PHP/MySQL), and I am currently facing some choices; any suggestion to help me make them would be really appreciated:
I have a database (let's call it DB_A) which is running under SQL Server Express (2008). I can't modify it (but I can use it or make a copy, DB_C). The problem is that this DB is updated every day, so is it possible to update the copy periodically as well?
Furthermore, I have to get some information by parsing an XML file that is on the web.
So I was wondering: would it be faster to parse it directly with C# whenever there is a request to the pages that need this information, or would it be better/faster to parse it once periodically (automatically) and save the data in SQL tables (in DB_C)?
But I don't know if the latter method is even possible.
One scenario would unfold like this: the website only connects to DB_A, and the other needed information is obtained directly by parsing the XML (from a URL).
Does anyone have any suggestions about this, please?
For the first part, copying the database and syncing changes:
I suggest implementing a replication solution to replicate data from the main DB to your copy.
Check the Microsoft article here for info on the available replication scenarios you can select from.
For the second part, parsing the XML data:
This depends on the data itself. If the data is very large, needs a lot of processing, business validation, etc., and does not need to be updated immediately, you surely won't have to parse it on each request.
Just run a periodic parser to work on the XML that is sent to you, as a separate job from your website, i.e. a service, a batch, etc. (see the sketch below).
Otherwise (the data is time-sensitive), I think you need to parse it with each request.
But it all depends on the business requirements of your web app.
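Here is a rough sketch of such a periodic parser job, meant to be run from a scheduled task or Windows service rather than from the website. The feed URL, the XML element names, and the ImportedData table are all hypothetical; XDocument.Load can read directly from a URL.

```csharp
using System.Data.SqlClient;
using System.Xml.Linq;

public static class XmlImportJob
{
    public static void Run(string xmlUrl, string connectionString)
    {
        // XDocument.Load accepts a URL as well as a file path.
        XDocument doc = XDocument.Load(xmlUrl);

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            foreach (XElement item in doc.Descendants("item"))
            {
                using (var command = new SqlCommand(
                    "INSERT INTO ImportedData (Name, Value) VALUES (@Name, @Value)",
                    connection))
                {
                    command.Parameters.AddWithValue("@Name", (string)item.Element("name"));
                    command.Parameters.AddWithValue("@Value", (string)item.Element("value"));
                    command.ExecuteNonQuery();
                }
            }
        }
    }
}
```

The website then reads the already-parsed rows from DB_C instead of touching the XML at request time.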

Best structure other than DataTable to store ~half a million records in the frontend

I have a DataTable which is fetching ~550,000 records from the database (SQLite). Fetching them slows down the system.
I am storing these records on the backend in a SQLite database and on the frontend in a DataTable.
Now what should I do so that the database creation time (~10.5 hours) on the backend and the fetching time on the frontend are reduced?
Is there any other structure that can be used to do so? I have read that a Dictionary and a binary file are fast. Can they be used for this purpose, and how?
(It's not a web app; it's a WPF desktop app where the frontend and backend are on the same machine.)
Your basic problem, I believe, is not the structure you maintain but the way you manage your data flow. Instead of fetching all the data from the database into some structure (a DataTable in your case), write a stored procedure that does the calculations you need on the server side and returns the data already calculated. This way you gain several benefits:
servers hosting databases are usually faster than development or client machines
a huge reduction in data transmission, as you return only the result of the calculation
EDIT
Considering the edited post, I would say that DataTable is already highly optimized for in-memory access, and I don't think changing it to something else will bring you notable benefits. What I think can bring a benefit is a revision of the program flow. In other words, try to answer the following questions:
do I need all those records at the same time?
can I run some calculations in a service, let's say, at night?
can I use SQL Server Express (just an example) and gain the benefit of being able to run a stored procedure, which may (this has to be measured) run the stuff faster, even on the same machine? (see the sketch below)
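As a quick illustration of that last point, this is roughly what calling a stored procedure from C# looks like, so the heavy calculation runs inside the database engine and only the results cross into your process. The procedure name is hypothetical.

```csharp
using System.Data;
using System.Data.SqlClient;

public static class ResultFetcher
{
    public static DataTable GetCalculatedResults(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("usp_CalculateResults", connection))
        {
            command.CommandType = CommandType.StoredProcedure;

            // Only the calculated result rows come back, not 550,000 raw records.
            var results = new DataTable();
            new SqlDataAdapter(command).Fill(results);
            return results;
        }
    }
}
```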
Have you thought about doing the calculation at the location of the data, rather than fetching it? Can you give us some more information as to what the data is stored in, how you are fetching it, and how you are processing it, please?
It's hard to make a determination about an optimisation without the metrics and the background information.
It's easy to say "yes, put the data in a file". But is the file local? Is the network the problem? Are you making the best use of cores/threading, etc.?
Much better to start back from the data, see what needs to be done to it, and then engineer the best optimisation.
Edit:
OK, so you are on the same machine? One thing you should really consider in this scenario is what you are doing to the data. Does it need to be SQL? Are you just using it to load a DataTable? Or is there a complexity you are not disclosing?
I've had a similar task: I just created a large text file and used memory mapping to read it efficiently without any overhead. Is this the kind of thing you're talking about?
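For reference, here is a minimal sketch of that memory-mapping idea in C# (requires .NET 4.0+ for System.IO.MemoryMappedFiles); the file path and the per-line processing are placeholders:

```csharp
using System.IO;
using System.IO.MemoryMappedFiles;

public static class LargeFileReader
{
    public static void Process(string path)
    {
        using (var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.Open))
        using (var stream = mmf.CreateViewStream())
        using (var reader = new StreamReader(stream))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // Process one record at a time; nothing beyond the current
                // line needs to be held in memory.
            }
        }
    }
}
```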
You could try using a persisted dictionary like RaptorDB for storing and fetching/manipulating the data in an ArrayList. The proof of concept should not take long to build.

Best place for holding values in ASP.NET?

Hi
I am working on an ASP.NET project (web app).
I have too many values, properties and fields. In order to use them I need to save them somewhere (I know about Session, Application and cookies).
Application isn't good because it is the same for all users, but Session is better (I need these values on the server side). Cookies aren't good either, and I won't use them because of security problems.
Session is good, but if I use Session a lot I must pay for massive memory on the host server.
So is there any better place that can hold them?
Edit 1
For more information: recently I have been fetching these values from the database (so I don't want to save them back to the database), and I use a WCF service to get these values from the database. I want to hold these values to use them (for example, sending them to a service method to do something, or doing something visually on the page).
Thanks all
As has been commented, there are many ways of implementing state management, depending on the amount of data you're looking to persist, the overall volume of traffic, hosting costs, maintainability, etc.
This MS link describes the pros and cons of some of the techniques.
Yes, Session is the best way to go here. It actually consumes comparatively the least memory of the server-side state management techniques.
If the values are much larger, you can store them in a database with the session ID as their key (see the sketch below); that way they consume somewhat less memory on the server.
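A minimal sketch of that session-keyed database idea; the SessionValues table and its columns are hypothetical, and the key comes from the current ASP.NET session:

```csharp
using System.Data.SqlClient;
using System.Web;

public static class SessionValueStore
{
    public static void Save(string name, string value, string connectionString)
    {
        // Key the row by the session ID so each user's values stay separate.
        string sessionId = HttpContext.Current.Session.SessionID;

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "INSERT INTO SessionValues (SessionId, Name, Value) " +
            "VALUES (@SessionId, @Name, @Value)", connection))
        {
            command.Parameters.AddWithValue("@SessionId", sessionId);
            command.Parameters.AddWithValue("@Name", name);
            command.Parameters.AddWithValue("@Value", value);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```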
A database could be a good idea, maybe SQL Server Express or SQL Server Compact Edition.

Effective way for SQL

I have a project in which we are thinking of optimizing some parts of the program.
It is basically a 3-tier hierarchical structure, and below is roughly how we structured it:
UI - aspx and aspx.cs
Business Logic - Web Services
Data Access - a separate class residing in the web service project.
Example:
The practice now is that when we retrieve data, we retrieve it as a DataSet from the web service and return it to the UI, which processes the display of the data. In order to make changes to the database, the changed DataSet is passed back to the web service. For a small number of results this does not pose much of an issue, but when the result set is huge, the web service will be passing a large XML payload back and forth, which greatly decreases performance.
So does anyone who happens to use this type of structure have a better way of processing database results? Right now this practice is used in all CRUD operations, so the idea that came to me is that for create and delete, or maybe even update, we could skip passing the DataSet and use a plain command, but I am not sure whether this is a good method.
If by dataset you mean DataSet, then this is something that should never be passed to and from a web service. I would recommend using POCO classes when communicating with the service. A DataSet already does XML serialization/deserialization internally, and when you send it to a web service this serialization/deserialization occurs twice. Another tip is not to resend the entire data set back to the web service, but only what has changed between the calls, to reduce network traffic.
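A minimal sketch of what that service boundary could look like; the OrderDto type and the service methods are hypothetical:

```csharp
using System;
using System.Collections.Generic;

// A plain POCO: only the fields below go over the wire, serialized once.
public class OrderDto
{
    public int Id { get; set; }
    public DateTime CreatedOn { get; set; }
    public decimal Total { get; set; }
}

public class OrdersService
{
    // Return plain objects instead of a DataSet.
    public List<OrderDto> GetOrders(int customerId)
    {
        // ...fetch from the data access layer and map to OrderDto...
        return new List<OrderDto>();
    }

    // Send back only the record that changed, not the whole set.
    public void UpdateOrder(OrderDto changedOrder)
    {
        // ...persist the single changed record...
    }
}
```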
Rule of thumb: in web applications you should exchange between client and server only the data that is needed. I mean, all filtering etc. should be done server side.
Some ideas: employ paging, avoid DataSets, and ask your WS for only the information you intend to show! A paging sketch follows.
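Here is one way to page on the server so the web service hands back a single slice at a time. ROW_NUMBER works on SQL Server 2005 and later; the Products table and ProductRow class are hypothetical.

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

public class ProductRow
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ProductPager
{
    public static List<ProductRow> GetPage(string connectionString, int page, int pageSize)
    {
        const string sql = @"
            SELECT Id, Name FROM (
                SELECT Id, Name, ROW_NUMBER() OVER (ORDER BY Id) AS RowNum
                FROM Products
            ) AS Numbered
            WHERE RowNum BETWEEN (@Page - 1) * @Size + 1 AND @Page * @Size";

        var rows = new List<ProductRow>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@Page", page);
            command.Parameters.AddWithValue("@Size", pageSize);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    rows.Add(new ProductRow { Id = reader.GetInt32(0), Name = reader.GetString(1) });
            }
        }
        return rows;
    }
}
```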

How to improve performance of XtraReports using a web service to get the data?

I'm facing a potential performance problem using the XtraReports tool and a web service in a Windows Forms app.
I know XtraReport loads a large data set (I understand a large data set as 10,000+ rows) by loading the first pages and then continuing to load the rest of the pages in the background, but all this is done with a data source in hand. So what happens if this data source has to pass through a web service, which will need to serialize the data in order to send it to the client?
The scenario is the following:
I have a thin Windows Forms client that makes calls to a web service, which takes that call and by reflection instantiates the corresponding class and calls the required method (please note that this architecture is inherited; I have almost no choice here, I have to use it). So I'll have a class that gets the data from the database and sends it to the client through the web service interface. This data can be a DataSet, SqlDataReader (also note that we're using SQL Server 2000, but it could be 2008 by the end of the year), DataTable, XML, etc.
If the result data set is large, the serialization + transfer time can be considerable, and rendering the report can add some more time, degrading the overall performance.
I know there is a possibility of using something like streaming video, but for streaming data through a web service, though I have no leads for trying anything along those lines.
What do you think about this? Please let me know any questions you may have or if I need to write more info for better statement of the problem.
Thanks!
I'll give you an answer you probably don't want to hear.
Set expectations.
Reports are typically slow because they have to churn through a lot of data. There just isn't a good way to get around that. But barring that, I'd do the following:
Serialize the data load to a binary state, convert it to something transferable via SOAP (base64, for instance) and transfer that. This way you'll avoid a lot of useless angle brackets (see the sketch after this list).
Precache as much data on the client as possible.
Focus on the perceived performance of the application. For instance, throw the report data gathering onto a background thread so the user can go and do other work, then show the user a notification when the report is available.
Sometimes it is possible to generate the report for the most often used criteria ahead of time and provide that when asked.
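A minimal sketch of the binary + base64 idea, assuming a DataSet payload. Setting RemotingFormat to SerializationFormat.Binary requires .NET 2.0 or later; BinaryFormatter is era-appropriate here but is considered unsafe in modern .NET.

```csharp
using System;
using System.Data;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

public static class ReportPayload
{
    public static string ToBase64(DataSet data)
    {
        // Serialize as binary rather than the default (verbose) XML.
        data.RemotingFormat = SerializationFormat.Binary;
        using (var stream = new MemoryStream())
        {
            new BinaryFormatter().Serialize(stream, data);
            // Base64 makes the bytes safe to carry in a SOAP string field.
            return Convert.ToBase64String(stream.ToArray());
        }
    }

    public static DataSet FromBase64(string payload)
    {
        using (var stream = new MemoryStream(Convert.FromBase64String(payload)))
        {
            return (DataSet)new BinaryFormatter().Deserialize(stream);
        }
    }
}
```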
Is there a way you can partition the data, i.e. return a small subset of it?
If there's no absolute requirement to return all the data at the same time, then dividing the data into smaller pieces will make a huge difference when reading from the database, serializing, and transporting through the web service.
Edit: I had deleted this answer since you already mentioned partitioning. I'm undeleting it now; maybe it will serve some use as the starting point of a discussion...
As for your point about how to work with paging: I think you're on the right track with the "Show next 100 results" approach. I don't know how XtraReport works or what its requirements for a data source are. There are 3 issues I see:
Server support for partitioned data (e.g. your web service should support returning just page 3's data)
Summary data: does your report have a row of totals or averages? Are those calculated by the XtraReport control? Does it require a complete data set to display those results? Can you provide the summaries to the control on your own (and find a more efficient way to calculate them without returning the entire data set)?
XtraReport support for a data source that uses paging.
Transferring DataSets is a bad idea. A DataSet carries a lot of useless data. Use simple objects. Use an ORM on the server side.
Also, you can precalculate some data. Reference data can be cached on the client and then joined with the server data.
