I need your help.
How do I navigate through records in a 3-tier ASP.NET application with C#?
Suppose I am returning a record set from my data access layer to my UI layer; what is the best way to navigate through it, i.e. move to the next, previous, first and last record?
Thanks in advance.
You should adopt a record pagination strategy for data access, whereby data is retrieved in pages of N rows at a time and can be fetched from an arbitrary offset (i.e. you shouldn't hold database resources such as connections or cursors open while the user decides what to do next).
The client itself will need to maintain this state, such as which record / page the user is currently browsing (e.g. stored in the browser).
Each time the user navigates to a new page, you will need to fetch the appropriate batch of data from the database, remembering any filters that are applicable (Page Number, Page Size and Start Record are common parameters here).
You haven't mentioned which database you are using, but most RDBMSs have pagination-friendly functions such as OFFSET and LIMIT (and SQL Server 2012 now has OFFSET / FETCH NEXT N ROWS). LINQ-based ORMs then expose these as easy-to-use paging functions, Skip() and Take() respectively.
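As a minimal sketch of that approach (the Customer type, the repository and the IQueryable source here are all hypothetical; any LINQ-capable ORM such as Entity Framework exposes its data sets as IQueryable):

using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomerRepository
{
    // In a real data access layer this would be an EF / LINQ to SQL query source.
    private readonly IQueryable<Customer> _customers;

    public CustomerRepository(IQueryable<Customer> customers)
    {
        _customers = customers;
    }

    // Returns one page of records; pageNumber is 1-based.
    public List<Customer> GetPage(int pageNumber, int pageSize)
    {
        return _customers
            .OrderBy(c => c.Id)                // a stable order is required for paging
            .Skip((pageNumber - 1) * pageSize) // becomes OFFSET / ROW_NUMBER in SQL
            .Take(pageSize)                    // becomes FETCH NEXT / TOP in SQL
            .ToList();
    }
}

Next, previous, first and last then simply map to page numbers: currentPage + 1, currentPage - 1, 1, and (totalRecords + pageSize - 1) / pageSize.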
I have built a web service in C#/WebApi2. It is completely REST based and scales horizontally very easily with a load balancer in front of it, since it has no state itself.
However, I'm looking for info or a solution on how to handle database scalability. I would like to start without focusing on any particular technology; more specifically, I would like to use the Dapper ORM in combination with multiple DBs if possible.
For example, I can connect to PostgreSQL using Dapper and the Npgsql ADO.NET driver, but are there components which handle the case of having one master PostgreSQL database and four slaves to read from? Are there existing C# components that handle these situations, where you hold connections to all of these DBs and, depending on the operation, the component chooses either the master for write actions or a slave for reads, load balancing over the slaves? (Since the number of reads will be significantly higher than the writes, this would be a fairly good solution.)
What if I have a master-master situation? And what about similar setups with other DBs, such as MS SQL with AlwaysOn, or MySQL Cluster and its variations? Are there any components to handle this kind of thing, and if not, does anybody have pointers to documentation/lectures/blogs/tutorials on this topic? I can't imagine I'm the first one to encounter this, and writing a completely custom connection pool might just be re-inventing the wheel...
I know it is a general question, but I have the feeling work must already have been done on this topic; I just can't find it. I know that in cloud scenarios (Azure and AWS) you have solutions for this, such as specific load balancers, but I would need this for an on-premise solution as well. Any info would be appreciated.
One way to scale a database horizontally is to split it into multiple databases, each holding a different set of data. Something like this:
Meta database (that has info on users, etc.)
- Database 1 (has data for first 100000 users)
- Database 2 (has data for next 100000 users)
- Database 3 (has data for next 100000 users)
Your API requests would route the query to the respective database based on info from Meta database.
This provides scalability but not availability. Many multi-tenant SaaS apps use this structure.
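A minimal sketch of such routing, with hypothetical shard ranges and connection strings (in practice the map would be loaded from the meta database; Dapper can then run queries over the returned connection as usual):

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

public class ShardInfo
{
    public int MaxUserId;            // highest user id stored on this shard
    public string ConnectionString;
}

public class ShardRouter
{
    // Hypothetical shard map; normally loaded from the meta database at startup.
    private readonly List<ShardInfo> _shards = new List<ShardInfo>
    {
        new ShardInfo { MaxUserId = 100000, ConnectionString = "Server=db1;Database=AppShard1;Integrated Security=true" },
        new ShardInfo { MaxUserId = 200000, ConnectionString = "Server=db2;Database=AppShard2;Integrated Security=true" },
        new ShardInfo { MaxUserId = 300000, ConnectionString = "Server=db3;Database=AppShard3;Integrated Security=true" },
    };

    // Pick the shard that owns this user and open a connection to it.
    public SqlConnection OpenConnectionFor(int userId)
    {
        foreach (ShardInfo shard in _shards)
        {
            if (userId <= shard.MaxUserId)
            {
                SqlConnection connection = new SqlConnection(shard.ConnectionString);
                connection.Open();
                return connection;
            }
        }
        throw new ArgumentOutOfRangeException("userId", "No shard covers this user id.");
    }
}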
Some references:
http://jamesgolick.com/2010/3/30/what-does-scalable-database-mean.html
https://developer.salesforce.com/page/Multi_Tenant_Architecture
I have some SQL Server stored procedures that generate statistical data for charting in a C# web application.
Right now the user of the web app has to wait about 5 minutes to see these charts with updated data, and this is a pain in the neck for the user and for me.
Some of the stored procs take more than 5 minutes to generate the data, but the web user doesn't need to see the info on the fly; updating the chart every 2-3 hours would be enough.
So I don't know what the best practice is to solve this.
I was thinking of creating a Windows service that calls the SPs every 2-3 hours and then stores the data in separate tables.
Any clue on how to deal with this?
Appreciate the help
As I said in the comments, indexed views (similar to materialized views) can increase the performance of certain common queries without you having to create temporary tables and the like.
The benefits of indexed views are performance and that they don't require much extra coding and effort. When you create an indexed view as opposed to a temp table, the query optimizer will (should) know when to take advantage of the view, without the end user needing to reference a temp or aggregate table explicitly.
Examples of the benefits of indexed views and how to implement them can be found here http://msdn.microsoft.com/en-us/library/dd171921(v=sql.100).aspx
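For orientation, a minimal sketch of what setting one up involves, run from C# here; the connection string, the Sales table and the view name are all hypothetical:

using System.Data.SqlClient;

class IndexedViewSetup
{
    static void Main()
    {
        // Hypothetical connection string and schema.
        string connectionString = "Server=.;Database=Reports;Integrated Security=true";

        // The view must be created WITH SCHEMABINDING; with a GROUP BY,
        // SQL Server also requires a COUNT_BIG(*) column.
        string viewDdl =
            "CREATE VIEW dbo.SalesSummary WITH SCHEMABINDING AS " +
            "SELECT ProductId, SUM(Amount) AS TotalAmount, COUNT_BIG(*) AS RecordCount " +
            "FROM dbo.Sales GROUP BY ProductId";

        // The view becomes "indexed" (materialized) once it gets a unique clustered index.
        string indexDdl =
            "CREATE UNIQUE CLUSTERED INDEX IX_SalesSummary ON dbo.SalesSummary (ProductId)";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (SqlCommand command = new SqlCommand(viewDdl, connection))
                command.ExecuteNonQuery();
            using (SqlCommand command = new SqlCommand(indexDdl, connection))
                command.ExecuteNonQuery();
        }
    }
}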
Here are some links on indexed views. As the comments said, views allow you to get information quickly rather than always running a select every time through a stored proc. Read the second link for a very good explanation of views.
MSDN
http://msdn.microsoft.com/en-ca/library/ms187864%28v=sql.105%29.aspx
Very well explained here
http://www.codeproject.com/Articles/199058/SQL-Server-Indexed-Views-Speed-Up-Your-Select-Quer
I'm a C++ programmer and I'm not familiar with the .NET database model. I usually use IDataReader (OdbcDataReader, OledbDataReader or SqlDataReader) to read data from database. Sometimes when I need a bulk of data I use DataAdapter, but what should I do to achieve the functionality of scrollable cursors that exists in native libraries like ODBC?
Thanks to all of you for your answers, but I am in a situation where I can't accept them; of course it is my fault for not explaining my problem completely. I explained it in a comment on one of the answers, which has now been removed.
I have to write a program that will act as a proxy between a client-side program and MSSQL. For this library I have the following requirements:
My program should be compatible with MSSQL 2000.
I don't know all the tables and queries that will be sent by the user; I should simply add some information to each query, make a log, and so on, and then execute it against MSSQL. So it is really hard to use techniques based on ordered field(s) of the query or the primary key of the table. (All my work is in one database, but that database is huge and may change over time.)
Only part of the data is needed by the client. Most DBMSs support LIMIT/OFFSET, but unfortunately MSSQL does not, and ROW_NUMBER does not exist in MSSQL 2000; even if it were supported, I would again need to understand the program logic, which requires parsing the SQL command. (I actually wrote a parsing library with boost::spirit, but that's native code, and besides I'm not yet 100% sure about its functionality.)
I may have multiple clients, but most of the queries they send are one of a few predefined queries (users still send custom queries, but those are about 30% of the total). So I think I can open some scrollable cursors and respond to clients using those cursors and a custom cache.
The server machine and its MSSQL instance will be dedicated to my program, so I really want to use all of the power of the server and the DBMS to achieve my functionality.
So now:
What is the problem with using scrollable cursors, and why should I avoid them?
How can I use scrollable cursors in .NET?
In SQL Server you can write paged queries like the ones below. The page number is easy to handle from the application; you do not need cursors for this task.
For SQL Server 2005 or higher
SELECT * FROM ( SELECT *, ROW_NUMBER() OVER (ORDER BY Id) AS Row FROM TableA ) AS Paged
WHERE Row > 40
AND Row <= 50
For SQL Server 2000
SELECT TOP 10 T.* FROM TableA AS T WHERE T.Id NOT IN
( SELECT TOP 40 Id FROM TableA ORDER BY Id )
ORDER BY T.Id
P.S.: edited to include support for SQL Server 2000.
I usually use DataReader.Read() to skip all the rows I don't want when paging against a DB that does not support paging.
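A minimal sketch of that skip-by-reading approach, with a hypothetical query and schema; note that the skipped rows are still streamed over the wire, so this only makes sense when the database offers nothing better:

using System.Collections.Generic;
using System.Data.SqlClient;

public static class ReaderPager
{
    public static List<string> GetPage(string connectionString, int pageNumber, int pageSize)
    {
        var rows = new List<string>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Name FROM Customers ORDER BY Id", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                int skip = (pageNumber - 1) * pageSize;
                // Advance past the rows before the requested page.
                while (skip-- > 0 && reader.Read()) { }

                // Read the page itself.
                while (rows.Count < pageSize && reader.Read())
                    rows.Add(reader.GetString(0));
            }
        }
        return rows;
    }
}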
If you don't want to build the SQL paged query yourself you are free to use my paging class: https://github.com/jgauffin/Griffin.Data/blob/master/src/Griffin.Data/BasicLayer/Paging/SqlServerPager.cs
When Microsoft designed the ADO.NET API, they made the decision to expose only firehose cursors (IDataReader etc). This may or may not actually pose a problem for you. You say that you want "functionality of scrollable cursors", but that can mean all sorts of things, not just paging, and each particular use case can be tackled in a variety of ways. For example:
Requirement: The user should be able to arbitrarily page up and down the resultset.
Retrieve only one page of data at a time, e.g. using the ROW_NUMBER() function. This is more efficient than scrolling through a cursor.
Requirement: I have an extremely large data set and I only want to process one row at a time to avoid running out of memory.
Use the firehose cursor provided by ADO.NET. Note that this is only practical if (a) you don't need to hit the database at all during the loop, or (b) you have MARS configured in your connection string.
Simulate a keyset cursor by retrieving the set of unique identifiers into an array, then loop through the array and read one row of data at a time.
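A minimal sketch of that keyset simulation, with hypothetical table and column names; scrolling forwards and backwards is then just moving an index over the key list:

using System.Collections.Generic;
using System.Data.SqlClient;

public static class KeysetScroller
{
    // Step 1: fetch only the keys; this list is the simulated "keyset".
    public static List<int> LoadKeys(string connectionString)
    {
        var keys = new List<int>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Id FROM Orders ORDER BY Id", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
                while (reader.Read())
                    keys.Add(reader.GetInt32(0));
        }
        return keys;
    }

    // Step 2: fetch a single row on demand as the caller scrolls.
    public static string LoadRow(string connectionString, int id)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Description FROM Orders WHERE Id = @id", connection))
        {
            command.Parameters.AddWithValue("@id", id);
            connection.Open();
            return (string)command.ExecuteScalar();
        }
    }
}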
Requirement: I am doing a complicated calculation that involves moving forwards and backwards through the resultset.
You should be able to re-write your algorithm to eliminate this requirement. For example, read one set of rows, process them, read another set of rows, process them, etc.
UPDATE (more information provided in the question)
Your business requirements are asking too much. You have to handle arbitrary queries that assume the presence of scrollable cursors, but you can't provide scrollable cursors, and you can't re-write the client code to not use scrollable cursors. That's an impossible position to be in. I recommend you stick with what you currently have (C++ and ODBC) and don't bother trying to re-write it in .NET.
I don't think cursors will work for your particular case. The main reason is that you have 3 tiers. But let's take two steps back.
Most 3-tier applications have a stateless middle tier (your C++ code). Caching is fine, since it is really just an optimization and does not create any real state in the middle tier. The middle tier normally keeps a small number of open sessions to the database, because opening a DB session is expensive for the processor and, once open, each session reserves a set amount of RAM on the database server. When the middle tier receives a request, it processes it and hands it on to the SQL database; an algorithm may be used to pick any of the open sessions, or it can even be done at random. In this model it is not possible to know which session will receive the next request. Cursors belong to the session that received the original query request, so you can't really expect that the receiving session will be the one holding your open cursor.
The 3-tier model I described is used mainly for web applications so they can scale to hundreds or thousands of clients, where SQL servers would never be able to open that many sessions. Microsoft ADO.NET already has many features to support this kind of architecture, so it is not very hard to implement, and the same model is used even in non-web applications depending on the circumstances. You could potentially keep track of your sessions so that you open a single session per client, but I would first make sure the use case justifies that; be aware that open cursors can take up a lot of resources as well.
Cursors still have a place within a single transaction; it's just hard to keep them open so that the client application can fetch/update values within the result set.
What I would suggest is that you do the following within the query transaction: store the primary key values of the main table of your query in a separate table, and on that separate table include other values such as a session id and a row number. Return the first few rows by joining to the new table in the original query, and in subsequent calls query the corresponding rows again by joining to your new table. You will need the equivalent of a caching mechanism to purge old data and to refresh the result set according to your needs.
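A minimal sketch of that idea; the ResultKeys table, the Orders table and the filter are all hypothetical, and on MSSQL 2000 an IDENTITY column can supply the row numbering (the numbers are global rather than per session, so the client pages relative to the first number it sees):

using System;
using System.Data.SqlClient;

public static class ServerSideKeyset
{
    // Run once when the client's query first arrives: snapshot the matching keys.
    // ResultKeys is a hypothetical table: (SessionId, RowNumber IDENTITY, OrderId).
    public static void Materialize(SqlConnection connection, Guid sessionId)
    {
        string sql =
            "INSERT INTO dbo.ResultKeys (SessionId, OrderId) " +
            "SELECT @sessionId, o.Id FROM dbo.Orders AS o " +
            "WHERE o.Status = 'Open' ORDER BY o.Id";
        using (SqlCommand command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@sessionId", sessionId);
            command.ExecuteNonQuery();
        }
    }

    // Run on each subsequent fetch: page through the snapshot by row number.
    public static SqlDataReader FetchRows(SqlConnection connection, Guid sessionId, int fromRow, int toRow)
    {
        string sql =
            "SELECT o.* FROM dbo.ResultKeys AS k " +
            "JOIN dbo.Orders AS o ON o.Id = k.OrderId " +
            "WHERE k.SessionId = @sessionId AND k.RowNumber BETWEEN @fromRow AND @toRow " +
            "ORDER BY k.RowNumber";
        SqlCommand command = new SqlCommand(sql, connection);
        command.Parameters.AddWithValue("@sessionId", sessionId);
        command.Parameters.AddWithValue("@fromRow", fromRow);
        command.Parameters.AddWithValue("@toRow", toRow);
        return command.ExecuteReader();
    }
}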
I am using Entity Framework and ASP.NET Web API, and there is a WinForms client that sends its data to the server or receives data from it.
I know how to get one record and also how to get a collection of records at one time, but there may be a large number of records, which could cause problems. So how can I get a collection of new information bit by bit on the client side?
I think I could get a list of the new IDs first and then fetch them one by one. Is there a better approach?
Any information about implementing this would be useful.
EDIT: to be clear, I mean getting a collection of information from the server on the client machine, not from the database on the server :)
Paging is the answer. Below is a great link to paging with Entity Framework and MVC using Skip and Take. You basically specify the starting index (or "skip") and then the number of records you want to get (or "take"). Like this:
context.YOURDBSET.OrderBy(o => o.Id).Skip(0).Take(10); //Gets the first 10 (EF requires an explicit ordering before Skip).
context.YOURDBSET.OrderBy(o => o.Id).Skip(10).Take(10); //Gets the next 10.
context.YOURDBSET.OrderBy(o => o.Id).Skip(20).Take(10); //Gets the next 10.
And so on.
Here's that link: http://msdn.microsoft.com/en-us/magazine/gg650669.aspx
A very common approach to this problem is to create some form of paging. For instance, the first time, just get the first 50 records from the server. Then, based on some condition (it could be user-triggered or automatic based on time, depending on your application), get the next set of data, records 51-100.
You can get creative with how you trigger getting the various pages of data. For instance, you could trigger the retrieval of data based on the user scrolling the mouse wheel if you need to show the data that way. Ultimately this depends on your scenario, but I believe paging is the answer.
Your page size could be 5 records, or it could be 500 records, but the idea is the same.
I have a US city/state list table in my SQL Server 2005 database which has a million records. My web application pages have a location textbox which uses an AJAX autocomplete feature. I need to show the complete city/state when the user types in 3 characters.
For example:
Input: bos
Output: Boston, MA
Currently, performance-wise, this functionality is pretty slow. How can I improve it?
Thanks for reading.
Have you checked the indexes on your database? If your query is formatted correctly and you have the proper indexes on your table, you can query a 5-million-row table and get your results in less than a second. I would suggest an index on City with State as an included column; that way, when you query by city, the query returns both the city and the state from the index alone.
If you run your query in SQL Server Management Studio and press Ctrl+M you can see the execution plan of your query. If you see something like a table scan or an index scan, then you have the wrong index on your table. You want your results to come from an index seek, which means your query is going through the proper pages in the database to find your data.
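For reference, a minimal sketch of creating such a covering index from C#; the connection string, table and column names are hypothetical (INCLUDE requires SQL Server 2005+, which matches the question):

using System.Data.SqlClient;

class CoveringIndexSetup
{
    static void Main()
    {
        // Hypothetical connection string and schema.
        string connectionString = "Server=.;Database=Locations;Integrated Security=true";

        // INCLUDE stores State in the index leaf pages, so the autocomplete
        // query can be answered by an index seek without touching the table.
        string ddl =
            "CREATE NONCLUSTERED INDEX IX_Cities_City " +
            "ON dbo.Cities (City) INCLUDE (State)";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(ddl, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}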
Hope this helps.
My guess would be that the problem you're having is not the database itself (although you should check it for index problems), but the amount of time it takes to retrieve the information from the database, put it into the appropriate objects, etc., and send it to the browser. If this is the case, there aren't a lot of options without some real work.
You can cache frequently accessed information on the web server. If you know there are a lot of cities which are frequently accessed, you can store them up front and then check the database only when what the user is looking for isn't already cached. We use prefix trees to store information when a user is typing something and we need to find it in a list.
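A minimal sketch of such a prefix tree (a trie keyed on lower-cased city names; the names and the cap on suggestions are hypothetical):

using System.Collections.Generic;

// A node in the prefix tree; completions are pre-collected per prefix.
public class TrieNode
{
    public Dictionary<char, TrieNode> Children = new Dictionary<char, TrieNode>();
    public List<string> Completions = new List<string>(); // e.g. "Boston, MA"
}

public class CityTrie
{
    private readonly TrieNode _root = new TrieNode();

    public void Add(string city, string display)
    {
        TrieNode node = _root;
        foreach (char c in city.ToLowerInvariant())
        {
            TrieNode child;
            if (!node.Children.TryGetValue(c, out child))
            {
                child = new TrieNode();
                node.Children[c] = child;
            }
            node = child;
            // Store the display string on every prefix node so lookups
            // cost only O(prefix length); cap the suggestions per prefix.
            if (node.Completions.Count < 20)
                node.Completions.Add(display);
        }
    }

    public IReadOnlyList<string> Lookup(string prefix)
    {
        TrieNode node = _root;
        foreach (char c in prefix.ToLowerInvariant())
        {
            if (!node.Children.TryGetValue(c, out node))
                return new string[0];
        }
        return node.Completions;
    }
}

Usage: trie.Add("Boston", "Boston, MA"); then trie.Lookup("bos") returns the stored "Boston, MA" suggestion without touching the database.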
You can start to pull information from the database as soon as the user starts to type and then pare the full result set down once you get more information from the user. This is a little trickier, as you'll have to store the information in memory between requests (so if the user types "B", you start the retrieval and store the result in a session; when the user has finished typing "BOS", the result set from the initial query is in memory temporarily and you can loop through it and pull the subset that matches the final request).
Use parent/child dropdowns.