I'm trying to determine the best approach for performing paging.
I have two options for grabbing data using SubSonic:
1) itemDatumCollection.LoadAndCloseReader(sp.GetReader());
or
2) itemsDataSet = sp.GetDataSet();
With both I am accessing the same stored procedure. Is there a simple way of paging with LoadAndCloseReader()?
I could load all the data through GetDataSet() on the client (say 4,000 rows), but that seems unnecessary, and that much data exceeds my WCF binding limits (which I think are set fairly high) when I use LoadAndCloseReader(), since it returns a complex object:
maxBufferSize="20000000" maxBufferPoolSize="524288" maxReceivedMessageSize="20000000"
So a couple of questions, I guess:
1) Is GetDataSet() faster at returning data? I don't need the complex collection object (it's just nice when coding).
2) How can I perform paging using my TSQL sproc?
Thanks.
I went with an approach more common in MVVM than MVC. I loaded all the data up front, then let the user page through it with a jQuery paging control, thereby minimizing round trips to the database. The initial response takes a bit longer (but is limited to 100 records), and in return the user gets lightning-fast paging.
I used the Simple Pager jQuery plugin.
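For anyone who still wants server-side paging out of the stored procedure (question 2 above), here is a minimal sketch of the usual ROW_NUMBER() pattern. The connection string, table, and column names are placeholders, and it assumes SQL Server 2005 or later:

using System.Data;
using System.Data.SqlClient;

// Server-side paging sketch: only the requested page crosses the wire,
// so the WCF message-size limits stop mattering. Names are placeholders.
public static DataTable GetItemsPage(string connectionString, int pageIndex, int pageSize)
{
    const string sql = @"
        SELECT Id, Name
        FROM (
            SELECT Id, Name,
                   ROW_NUMBER() OVER (ORDER BY Id) AS RowNum
            FROM Items
        ) AS Numbered
        WHERE RowNum BETWEEN (@PageIndex * @PageSize) + 1
                         AND (@PageIndex + 1) * @PageSize;";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        command.Parameters.AddWithValue("@PageIndex", pageIndex); // zero-based
        command.Parameters.AddWithValue("@PageSize", pageSize);

        var table = new DataTable();
        new SqlDataAdapter(command).Fill(table); // Fill opens and closes the connection
        return table;
    }
}

The same SELECT can live inside the existing sproc with two extra parameters, so it stays callable through SubSonic.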
I have an ASP.NET Web API application. In the database I have a big list (between 100,000 and 200,000 entries) of pairs like id:name, and this list changes quite rarely. I need to implement filtering like /pair/filter?fragment=bla. It should return the first 25 pairs where any word in the name starts with the word fragment. I see two approaches here. The first approach is to load the data into a cache (HttpRuntime.Cache, Redis, or something like that) to reduce loading time and filter with LINQ. But I think there will be problems with the time required for serializing/deserializing. The other approach: say I have the pair 22:some title here, then I need to provide a separate table like this:
ID | FRAGMENT
22 | some
22 | title
22 | here
with a primary key on both columns and a separate index on the FRAGMENT column to make queries faster. Any suggestions and remarks are welcome.
Update: I've rethought this. I don't want to query the database, because requests happen quite often. So now the best solution I see is to:
load the entire list into memory
build a trie structure which keeps a hashset of values in each node
for a single text fragment, just return the hashset from the trie node; for several fragments, find all the hashsets and take their intersection (see the sketch below)
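A minimal sketch of that trie idea; the class and member names are mine, not from any library:

using System.Collections.Generic;

// Prefix trie whose nodes accumulate the ids of every pair containing
// a word that passes through that node.
public class TrieNode
{
    public Dictionary<char, TrieNode> Children = new Dictionary<char, TrieNode>();
    public HashSet<int> Ids = new HashSet<int>();
}

public class FragmentIndex
{
    private readonly TrieNode _root = new TrieNode();

    // 22:"some title here" registers 22 under every prefix of
    // "some", "title", and "here".
    public void Add(int id, string name)
    {
        foreach (var word in name.ToLowerInvariant().Split(' '))
        {
            var node = _root;
            foreach (var ch in word)
            {
                TrieNode child;
                if (!node.Children.TryGetValue(ch, out child))
                {
                    child = new TrieNode();
                    node.Children[ch] = child;
                }
                node = child;
                node.Ids.Add(id);
            }
        }
    }

    // One fragment: a single walk down the trie.
    public HashSet<int> Find(string fragment)
    {
        var node = _root;
        foreach (var ch in fragment.ToLowerInvariant())
        {
            if (!node.Children.TryGetValue(ch, out node))
                return new HashSet<int>(); // no word starts with this fragment
        }
        return node.Ids;
    }

    // Several fragments: intersect the per-fragment sets.
    public HashSet<int> FindAll(IEnumerable<string> fragments)
    {
        HashSet<int> result = null;
        foreach (var fragment in fragments)
        {
            var ids = Find(fragment);
            if (result == null) result = new HashSet<int>(ids);
            else result.IntersectWith(ids);
        }
        return result ?? new HashSet<int>();
    }
}

The trade-off is memory: every prefix of every word carries an id set. For 100,000-200,000 short names that should still fit comfortably in a web process.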
You could try a full-text index on your current DB (if it's supported) and the CONTAINS keyword, like so:
SELECT * FROM tableName WHERE CONTAINS(name, 'bla*');
This will look for words starting with "bla" anywhere in the string, so it will also match "Monkeys blabla".
I don't really understand your question, but if you want to query the table you can do so, since you already have the query string. You can try this:
var res = _repository.Table.Where(c => c.Name.StartsWith("bla")).Take(25); // first 25 whose Name begins with the fragment (it does not check every word)
If that doesn't help, try to restructure your question a little.
Is this a case of premature optimization?
How many users will be hitting this service simultaneously? How many will be hitting your database simultaneously? How efficient is your query? How much data will be returned across the wire?
In most cases, you can't outsmart an efficient database for performance. Your row count is too small to create a truly heavy burden on your application's runtime performance when querying. This assumes, of course, that your query is well written and that you're properly opening, closing, and freeing resources in a timely fashion.
Caching the data in memory has its trade-offs that should be considered. It increases the memory footprint of your application, and requires you to write and maintain additional code to maintain that cache. That is by no means prohibitive, but should be considered in view of your overall architecture.
Consider these things carefully. From what I can tell, keeping this data in the database is fine. Deserialization tends to be fast (as most of the data you return is native types), and shouldn't be cost-prohibitive.
I want to implement server-side paging in my Silverlight application. To get an idea of the steps required, I went through this article on custom paging in ASP.NET, which describes how to design a SQL query that returns results according to the requested page and the number of records per page. However, I am totally confused about how to call it from my Silverlight application, i.e. how to specify it in the C# code.
The default paging using the DataPager is pretty simple.
PagedCollectionView pagingCollection = new PagedCollectionView(e.Result); //e.Result contains `List` returned by the method that calls the stored procedure GetProducts
pagerProductGrids.Source = pagingCollection;
gridProductGrid.ItemsSource = pagingCollection;
But I'm clueless about how to do it on my own: what properties I need to get and set the page size, the total number of records, etc., i.e. how to configure my DataGrid and DataPager to pass StartingRowIndex and MaximumRowCount.
Please help!
I came across this article a few years ago and it worked like a charm for me. I've added it to my framework and have been reusing the method ever since. The article is well explained, and I believe it is exactly what you are looking for.
Paging Data from the Server with Silverlight
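In case the link rots, the shape of the server side is roughly this; the service and method names below are made up for illustration, not taken from the article:

using System.Collections.Generic;
using System.ServiceModel;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hypothetical WCF contract for server-side paging. The Silverlight
// client gets async proxies (GetProductPageAsync plus a Completed event).
[ServiceContract]
public interface IProductService
{
    // Runs the paged SQL from the article with these two values.
    [OperationContract]
    List<Product> GetProductPage(int startRowIndex, int maximumRows);

    // Lets the client compute the page count for the DataPager.
    [OperationContract]
    int GetTotalProductCount();
}

On the client you handle the pager's page changes yourself, call GetProductPageAsync(pageIndex * pageSize, pageSize), and rebind the grid in the Completed event, instead of handing the whole list to a PagedCollectionView.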
I am reading a huge amount of data from a SQL Server database (approx. 2,000,000 rows) and I want to display it to the end user in a WinForms GridView.
First Approach
The first idea was to use SqlDataReader, which normally doesn't take too much time to read tables with around 200,000 entries, but it uses too much memory (and time!) in the case above.
Actual Solution
The actual solution reads from the database through LINQ (a dbml file), which is fine because it connects the components directly to the DB server. It loads data on the fly, which is really great.
Problems
The problems are :
When I plug my grid view into the feedback source, it seems that I can't read the columns of my grid through code.
This is how I plug my LookUpSearchEdit into the source:
slueTest.Properties.DataSource = lifsTest; // This is the LinqInstantFeedbackSource
gvTest.PopulateColumns();
When I do:
gv.Columns["FirstColumn"] // "FirstColumn" is the name of the field in the LINQ Class (and in the DB)
It raises an exception, and the data in the feedback source is not accessible at all.
I lost all the features of the LookUpSearchEdit (sorting, searching, etc.), and I think it is because the data is read on the fly.
Questions
Am I doing this right? Or is there a better way to display a lot of data from a DB without consuming lots of memory and time?
Here's a problem I experience (simplified example):
Let's say I have several tables:
One customer can have many products, and a product can have multiple features.
On my ASP.NET front end I have a grid with customer info, something like this:
Name Address
John 222 1st st
Mark 111 2nd st
What I need is the ability to filter customers by feature, so I have a dropdown list of the available features connected to a customer.
What I currently do:
1. I return a DataTable of customers from a stored procedure and store it in ViewState.
2. I return a DataTable of features connected to customers from a stored procedure and store it in ViewState.
3. When a filter is selected, I run the stored procedure again with the new feature_id filter, where I do the joins again to show only the customers that have the selected feature.
My problem: It is very slow.
I think that possible solutions would be:
1. On page load, return ALL the data in one ViewState variable, so basically three lists of nested objects. But this will make my page load slow.
2. Perform async loading in some smart way. How?
Any better solutions?
Edit:
this is a simplified example; I also need to filter customers by a property that is connected to the Customer table through six tables.
The way I deal with these scenarios is by passing XML to SQL and then running a join against that. So the XML would look something like:
<Features><Feat Id="2" /><Feat Id="5" /><Feat Id="8" /></Features>
Then you can pass that XML into SQL (there are different ways depending on your version of SQL Server, but in the newer versions it's a lot easier than it used to be):
http://www.codeproject.com/Articles/20847/Passing-Arrays-in-SQL-Parameters-using-XML-Data-Ty
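A rough sketch of what that looks like end to end; the table and column names are placeholders, and SqlDbType.Xml requires SQL Server 2005 or later:

using System.Data;
using System.Data.SqlClient;

// Pass the feature ids as an XML parameter and join against it.
public static DataTable GetCustomersByFeatures(string connectionString, string featuresXml)
{
    // featuresXml looks like: <Features><Feat Id="2" /><Feat Id="5" /></Features>
    const string sql = @"
        SELECT DISTINCT c.Name, c.Address
        FROM @Features.nodes('/Features/Feat') AS f(x)
        JOIN CustomerFeatures cf ON cf.FeatureId = f.x.value('@Id', 'int')
        JOIN Customers c ON c.Id = cf.CustomerId;";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        command.Parameters.Add("@Features", SqlDbType.Xml).Value = featuresXml;
        var table = new DataTable();
        new SqlDataAdapter(command).Fill(table);
        return table;
    }
}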
Also, don't put any of that in ViewState; there's really no reason for that.
Storing an entire list of customers in ViewState is going to be hideously slow; storing all information for all customers in ViewState is going to be worse, unless your entire customer base is very very small, like about 30 records.
For a start, why are you loading all the customers into ViewState? If you have any significant number of customers, load the data a page at a time. That will at least reduce the amount of data flowing over the wire and might speed up your stored procedure as well.
In your position, I would focus on optimizing the data retrieval first (including minimizing the amount you return), and then worry about faster ways to store and display it. If you're up against unusual constraints that prevent this (a very slow database, no profiling tools, not being allowed to change stored procedures), then please let us know.
Solution 1: Include whatever criteria you need to filter on in your query, and only return and render the requested records. No need to use ViewState (see the sketch below).
Solution 2: Retrieve some reasonable page limit of customers and filter in the browser with JavaScript. Allow easy navigation to the next page.
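For Solution 1, a minimal sketch; the sproc name and @FeatureId parameter are placeholders:

using System;
using System.Data;
using System.Data.SqlClient;

// Let the database do the filtering and bind the result directly;
// nothing goes into ViewState.
public static DataTable GetCustomers(string connectionString, int? featureId)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("GetCustomersByFeature", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        // DBNull means "no filter"; the sproc skips the feature join in that case.
        command.Parameters.AddWithValue("@FeatureId", (object)featureId ?? DBNull.Value);

        var table = new DataTable();
        new SqlDataAdapter(command).Fill(table);
        return table;
    }
}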
Hello, I am developing a database web application and I have many reports to populate. I just want to know which of the following methods will give me fast and accurate results, as the data will be in the thousands.
Populating a DataSet?
Using a DataReader?
Using an ArrayList?
I am using a 3-tier architecture, so if I am writing a function, what would be the most appropriate return type for it in the data access layer?
You can use "push" method to set the data with a DataSet - this will give you the advantage to set the datasource for the main report and all subreports in one call to the database. However there are some limitations, for example you will be not able to use subreports in the details section.
I am not sure you can use a DataReader or an ArrayList as a data source, and even if you can, I cannot see any advantage. Using a DataReader means you will keep your connection to the database open while the report is rendered (the first pass); this may take some time and is not necessary. An ArrayList (if it can be used) will only let you set the data for one table; it is a flat structure with no relations. In most cases you will probably load the ArrayList from the database anyway, so it makes no sense to get the data, load it into an array, and use the array to set one table when you could use a DataSet.
Why are you ignoring the regular "pull" method? It would be simpler.
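If this is Crystal Reports (the push/pull terminology suggests it, but that's an assumption), the push method boils down to this sketch:

using System.Data;
using CrystalDecisions.CrystalReports.Engine;

// Fill one DataSet with every table the report needs in a single
// database call, then push it into the loaded report.
public static ReportDocument LoadReport(string reportPath, DataSet reportData)
{
    var report = new ReportDocument();
    report.Load(reportPath);          // the .rpt file
    report.SetDataSource(reportData); // push: the report never queries the DB itself
    return report;
}

With the pull method you skip SetDataSource entirely and let the report run its own saved query, which is why it is usually the simpler of the two.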