I have a table with more than 10,000,000 rows.
I need some filters (some IN queries and some LIKE queries) and a dynamic ORDER BY.
I wondered what the best way is to work with big data: pagination, filtering and ordering.
Of course it's easy to work with Entity Framework, but I think the performance is better with a stored procedure.
I have a table with more than 10,000,000 Rows.
You have a small table, nearly tiny, small enough to cause no problems for anyone who is not abusing the server.
Seriously.
I wondered what is the best way to work with big data,
That starts with HAVING big data. That is generally defined as multiple times the RAM of a low-cost server, which today has around 16 cores and around 128 GB of memory. After that it gets expensive.
General rules are:
DO NOT PAGE. Paging at the start is easy, but getting to the later results is slow - either you precalculate the pages and store them, OR you have to re-execute queries just to throw away results. It works nicely on pages 1-2, then it gets slower.
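To make the deep-paging cost concrete: OFFSET-style paging (Skip/Take) reads and discards every earlier row, while keyset ("seek") paging remembers the last key shown and starts from there. The sketch below is only an illustration and assumes an EF DbContext with an Orders set and an indexed Id column; both names are placeholders, not anything from the question.

```csharp
// A minimal sketch of keyset ("seek") paging as an alternative to Skip/Take,
// assuming an EF context `db` with an Orders DbSet and an indexed Id column
// (placeholder names, not from the original post).
var pageSize = 50;
long lastSeenId = 0; // remembered from the previous page the client received

var page = db.Orders
    .Where(o => o.Id > lastSeenId)   // seek past the last row already shown
    .OrderBy(o => o.Id)              // must match the index to stay cheap
    .Take(pageSize)                  // no Skip(): nothing is read and thrown away
    .ToList();

lastSeenId = page.Count > 0 ? page[page.Count - 1].Id : lastSeenId;
```

Keyset paging only works when the sort can be anchored to an indexed, unique column, which is worth keeping in mind given the dynamic ORDER BY requirement in the question.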
Of course it's easy to work with Entity Framework, but I think the performance is better with a stored procedure
And why would that be? The overhead of generating the query is tiny, and contrary to often-repeated delusions, SQL Server uses query plan caching for everything. A stored procedure is only faster if the compilation overhead is significant (i.e. SMALL data), or if you would otherwise pull a lot of data over the network just to send results back (processing that belongs in the database).
For anything else the "general" performance impact is close to zero.
OTOH it (EF/LINQ) allows you to send much more tailored SQL without getting into really bad and ugly stored procedures that either issue dynamic SQL internally or have tons of complex conditions for optional parameters.
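To make the "tailored SQL" point concrete, here is a minimal sketch (entity, property and variable names are made up, not from the question) of composing only the filters that were actually supplied, so EF generates a lean WHERE clause instead of the optional-parameter pattern a catch-all stored procedure needs.

```csharp
// A minimal sketch of composing only the filters the caller actually supplied,
// instead of a stored procedure full of "(@p IS NULL OR col = @p)" conditions.
// Entity and property names here are placeholders.
IQueryable<Order> query = db.Orders;

if (customerId.HasValue)
    query = query.Where(o => o.CustomerId == customerId.Value);

if (!string.IsNullOrEmpty(nameFilter))
    query = query.Where(o => o.CustomerName.Contains(nameFilter)); // translated to LIKE '%...%'

query = sortDescending
    ? query.OrderByDescending(o => o.CreatedOn)
    : query.OrderBy(o => o.CreatedOn);

var results = query.Take(100).ToList(); // only now is the SQL generated and executed
```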
What to be careful with:
IN clauses can be terrible for performance. Do not put hundreds of elements in there. If you need that, a stored procedure and a table variable (or table-valued parameter) that is filled up front and then joined is the better way; see the sketch after this list.
As I said - careful with paging. Someone asking for page 100 and just pressing forward is repeating a TON of processing.
And: attitude adjustment. The time when 10 million rows were large was around 20 years ago.
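For the IN-clause point above, one concrete shape this can take is a table-valued parameter. The sketch below is only an illustration: the table type dbo.IdList, the procedure dbo.GetOrdersByIds and the surrounding names are invented, and it assumes System.Data and System.Data.SqlClient.

```csharp
// A minimal sketch of sending a large ID list as a table-valued parameter instead
// of a huge IN (...) clause. It assumes a user-defined table type and a stored
// procedure like these already exist on the server (names are made up):
//
//   CREATE TYPE dbo.IdList AS TABLE (Id INT PRIMARY KEY);
//   CREATE PROCEDURE dbo.GetOrdersByIds @Ids dbo.IdList READONLY
//   AS SELECT o.* FROM dbo.Orders o JOIN @Ids i ON i.Id = o.Id;

var table = new DataTable();
table.Columns.Add("Id", typeof(int));
foreach (var id in ids)            // ids: IEnumerable<int> with hundreds of values
    table.Rows.Add(id);

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.GetOrdersByIds", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    var p = cmd.Parameters.AddWithValue("@Ids", table);
    p.SqlDbType = SqlDbType.Structured;
    p.TypeName = "dbo.IdList";

    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // process each row
        }
    }
}
```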
I have a database in SQL Server 2012 and want to update a table in it.
My table has three columns; the first column is of type nchar(24). It is filled with billions of rows. The other two columns are of the same type, but they are null (empty) at the moment.
I need to read the data from the first column and do some calculations with this information. The results of my calculation are two strings, and these two strings are the data I want to insert into the two empty columns.
My question is what is the fastest way to read the information from the first column of the table and update the second and third column.
Read and update step by step? Read a few rows, do the calculation, update the rows while reading the next few rows?
As it comes to billions of rows, performance is the only important thing here.
Let me know if you need any more information!
EDIT 1:
My calculation can't be expressed in SQL.
As the SQL server is on the local machine, the throughput is nothing we have to worry about. One calculation takes about 0.02154 seconds, and I have a total of 2,809,475,760 rows; this is about 280 GB of data.
Normally, DML is best performed in bigger batches. Depending on your indexing structure, a small batch size (maybe 1000?!) can already deliver the best results, or you might need bigger batch sizes (up to the point where you write all rows of the table in one statement).
Bulk updates can be performed by bulk-inserting information about the updates you want to make, and then updating all rows in the batch in one statement. Alternative strategies exist.
As you can't hold all rows to be updated in memory at the same time, you probably need to look into MARS to be able to perform streaming reads while writing occasionally at the same time; or you can do it with two connections. Be careful not to deadlock across connections: SQL Server cannot detect that in principle, and only a timeout will resolve such a (distributed) deadlock. Making the reader run under snapshot isolation is a good strategy here; snapshot isolation causes the reader to neither block nor be blocked.
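As a rough sketch of the two-connection variant described above: stream the keys on one connection under snapshot isolation, bulk-insert the computed values into a staging table on a second connection, and apply each batch with a single UPDATE ... JOIN. Table, column and helper names (BigTable, KeyCol, Col2, Col3, StagingTable, Calculate) are invented, and ALLOW_SNAPSHOT_ISOLATION is assumed to be ON for the database.

```csharp
// Sketch only: names are placeholders, Calculate is the caller's calculation that
// can't be expressed in SQL, and snapshot isolation is assumed to be enabled.
const int batchSize = 10000;
var batch = new DataTable();
batch.Columns.Add("KeyCol", typeof(string));
batch.Columns.Add("Col2", typeof(string));
batch.Columns.Add("Col3", typeof(string));

using (var readConn = new SqlConnection(connectionString))
using (var writeConn = new SqlConnection(connectionString))
{
    readConn.Open();
    writeConn.Open();

    void Flush()
    {
        // Bulk-insert the batch into the staging table, then update the real table once.
        new SqlCommand("TRUNCATE TABLE dbo.StagingTable", writeConn).ExecuteNonQuery();
        using (var bulk = new SqlBulkCopy(writeConn) { DestinationTableName = "dbo.StagingTable" })
            bulk.WriteToServer(batch);
        new SqlCommand(
            @"UPDATE t SET t.Col2 = s.Col2, t.Col3 = s.Col3
              FROM dbo.BigTable t JOIN dbo.StagingTable s ON s.KeyCol = t.KeyCol",
            writeConn).ExecuteNonQuery();
        batch.Clear();
    }

    var readCmd = new SqlCommand(
        "SET TRANSACTION ISOLATION LEVEL SNAPSHOT; SELECT KeyCol FROM dbo.BigTable",
        readConn);

    using (var reader = readCmd.ExecuteReader())
    {
        while (reader.Read())
        {
            string key = reader.GetString(0);
            string a, b;
            Calculate(key, out a, out b);   // the calculation that can't be done in SQL
            batch.Rows.Add(key, a, b);
            if (batch.Rows.Count == batchSize) Flush();
        }
    }
    if (batch.Rows.Count > 0) Flush();
}
```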
Linq is pretty efficient in my experience. I wouldn't worry too much about optimizing your code yet; prematurely optimizing your code is exactly the thing you should avoid. Just get it to work first, then refactor as needed. As a side note, I once tested a stored procedure against a Linq query, and Linq won (to my amazement).
There is no simple how-to and no one-size-fits-all solution here.
If there are billions of rows, does performance matter? It doesn't seem to me that it has to be done within a second.
What is the expected throughput of the database and network? If you're behind a POTS dial-up link, the case is massively different than if you're on 10 Gb fiber.
The computations? How expensive are they? Just c = a + b, or heavy processing of other text files?
These are just a couple of the questions this raises. There is a lot more involved that we are not aware of, so we can't answer definitively.
Try a couple of things and measure it.
As a general rule: Writing to a database can be improved by batching instead of single updates.
Using an async pattern can free up some of the time for calculations instead of waiting.
EDIT in reply to comment
If calculations take 20 ms, the biggest problem is IO. Multithreading won't bring you much.
Read the records in sequence using snapshot isolation, so the read is not hampered by write locks, and update in batches. My guess is that the reader stays ahead of the writer without much trouble; reading in batches adds complexity without gaining much.
Find the sweet spot for the right batch size by experimenting.
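One way to run that experiment, as a rough sketch: time the same fixed sample of rows at a few candidate batch sizes and compare rows per second. UpdateBatch here is a stand-in for the real read/compute/update cycle, not an existing method.

```csharp
// A minimal sketch of measuring candidate batch sizes; UpdateBatch is hypothetical
// and should run the real read/compute/update cycle over `sampleRows` rows.
int[] candidates = { 100, 1000, 10000, 50000 };
const int sampleRows = 1000000;

foreach (var batchSize in candidates)
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    UpdateBatch(batchSize, sampleRows);
    sw.Stop();
    Console.WriteLine($"batch {batchSize,6}: {sw.Elapsed.TotalSeconds:F1} s " +
                      $"({sampleRows / sw.Elapsed.TotalSeconds:F0} rows/s)");
}
```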
I want to create a .Net application that uses a database that will contain around 700 million records in one of its tables. I wonder if the performance of SQLite would satisfy this scenario or should I use SQL Server. I like the portability that SQLite gives me.
Go for SQL Server for sure. 700 million records in SQLite is too much.
With SQLite you have the following limitations:
Single process write.
No mirroring
No replication
Check out this thread: What are the performance characteristics of sqlite with very large database files?
700m is a lot.
To give you an idea: let's say your record size was 4 bytes (essentially storing a single value); then your DB is going to be over 2 GB. If your record size is something closer to 100 bytes, then it's closer to 65 GB (and that's not including space used by indexes, transaction log files, etc.).
We do a lot of work with large databases and I'd never consider SQLite for anything of that size. Quite frankly, "portability" is the least of your concerns here. In order to query a DB of that size with any sort of responsiveness you will need an appropriately sized database server. I'd start with 32 GB of RAM and fast drives.
If it's write heavy 90%+, you might get away with smaller RAM. If it's read heavy then you will want to try and build it out so that the machine can load as much of the DB (or at least indexes) in RAM as possible. Otherwise you'll be dependent on disk spindle speeds.
SQLite SHOULD be able to handle this much data. However, you may have to configure it to allow it to grow to this size, and you shouldn't have this much data in an "in-memory" instance of SQLite, just on general principles.
For more detail, see this page which explains the practical limits of the SQLite engine. The relevant config settings are the page size (up to 64 KB) and the page count (up to roughly 2.1 billion, the maximum of a signed 32-bit int). Do the math, and the entire database can take up more than 140 TB. A database consisting of a single table with 700m rows would be on the order of tens of gigs; easily manageable.
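If configuration does become necessary, the settings above are plain PRAGMAs. A minimal sketch, assuming the Microsoft.Data.Sqlite package (System.Data.SQLite looks much the same), and remembering that page_size only takes effect on a new or freshly VACUUMed database:

```csharp
// A minimal sketch of bumping the page size before any data is written,
// assuming the Microsoft.Data.Sqlite package; file name is arbitrary.
using Microsoft.Data.Sqlite;

using (var conn = new SqliteConnection("Data Source=events.db"))
{
    conn.Open();
    using (var cmd = conn.CreateCommand())
    {
        cmd.CommandText = "PRAGMA page_size = 65536;";   // 64 KB pages
        cmd.ExecuteNonQuery();
        cmd.CommandText = "PRAGMA journal_mode = WAL;";  // friendlier for concurrent readers
        cmd.ExecuteNonQuery();
    }
}
```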
However, just because SQLite CAN store that much data doesn't mean you SHOULD. The biggest drawback of SQLite for large datastores is that the SQLite code runs as part of your process, using the thread on which it's called and taking up memory in your sandbox. You don't get the tools that are available in server-oriented DBMSes to "divide and conquer" large queries or datastores, like replication/clustering. In dealing with a large table like this, insertion/deletion will take a very long time to put it in the right place and update all the indexes. Selection MAY be livable, but only in indexed queries; a page or table scan will absolutely kill you.
I've had tables with similar record counts and no problems retrieval-wise.
The hardware and the memory allocated to the server are where you can start. See this for examples: http://www.sqlservercentral.com/blogs/glennberry/2009/10/29/suggested-max-memory-settings-for-sql-server-2005_2F00_2008/
Regardless of size or number of records, as long as you:
create indexes on foreign key(s),
store common queries in Views (http://en.wikipedia.org/wiki/View_%28database%29),
and maintain the database and tables regularly
you should be fine. Also, setting the proper column type/size for each column will help.
So I am troubleshooting some performance problems on a legacy application, and I have uncovered a pretty specific problem (there may be others).
Essentially, the application is using an object relational mapper to fetch data, but it is doing so in a very inefficient/incorrect way. In effect, it is performing a series of entity graph fetches to fill a datagrid in the UI, and on databinding the grid (it is ASP.Net Webforms) it is doing additional fetches, which lead to other fetches, etc.
The net effect of this is that many, many tiny queries are being performed. SQL Profiler shows that a certain page performs over 10,000 queries to fill a single grid. No query takes over 10 ms to complete, and most of them register as 0 ms in Profiler. Each query will use and release one connection, and the series of queries is single-threaded (per HTTP request).
I am very familiar with the ORM, and know exactly how to fix the problem.
My question is: what is the exact effect of having many, many small queries being executed in an application? In what ways does it/can it stress the different components of the system?
For example, what is the effect on the webserver's CPU and memory? Would it flood the connection pool and cause blocking? What would be the impact on the database server's memory, CPU and I/O?
I am looking for relatively general answers, mainly because I want to start monitoring the areas that are likely to be the most affected (I need to measure => fix => re-measure). Concurrent use of the system at peak would likely be around 100-200 users.
It will depend on the database, but generally there is a parse phase for each query. If the query has used bind variables it will probably be cached. If not, you wear the cost of a parse, and that often means short locks on resources, i.e. BAD. In Oracle, CPU and blocking are much more prevalent at the parse than the execute; SQL Server less so, but it's worse at the execute. Obviously doing 10K of anything over a network is going to be a terrible solution, especially x 200 users. The volume I'm sure is fine, but that frequency will really highlight all the overhead in comms latency and the like. Connection pools generally are in the hundreds, not tens of thousands, and now you have tens of thousands of objects all being created, queued, managed, destroyed, garbage collected etc.
But I'm sure you already know all this deep down. Ditch the ORM for this part and write a stored procedure to execute the single query to return your result set. Then put it on the grid.
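For completeness, a minimal sketch of what the single round trip can look like on the WebForms side; the procedure name dbo.GetGridData, the parameter and the GridView id are placeholders rather than anything from the question.

```csharp
// A minimal sketch of replacing the per-row ORM fetches with one stored procedure
// call bound to the grid. All names here are made up.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.GetGridData", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@CustomerId", customerId); // whatever filters the page needs

    var table = new DataTable();
    new SqlDataAdapter(cmd).Fill(table);   // one round trip instead of 10,000

    myGridView.DataSource = table;
    myGridView.DataBind();
}
```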
We currently use List<T> to store events from a simulation project we are running. We need to optimise memory utilisation and the time it takes to process the events in order to derive certain key metrics.
We thought of moving the event log to a SQL Server Compact database table and then possibly use Linq to calculate the metrics. From your experience do you think it will be faster to use SQL Server Compact than C#'s built-in data structures or are we going to have issues?
Some ideas.
MSMQ (Microsoft Message Queue)
You can have a thread dequeueing off of MSMQ and updating metrics on the fly. If you need to store these events for later perusal you can put them into the database as you dequeue them. MSMQ demonstrates much better scalability in these scenarios - especially when the publisher and subscriber have asymmetric processing speeds, and when binary data is being used (SQL can get bogged down with allocating space for VARBINARY, or allocating/splitting pages for indexes).
The two other SQL scenarios are complementary to this one - you can still use dequeueing to insert into SQL, to avoid any hiccups in your simulation while SQL allocates space.
You can side-step what @Aliostad said using this one, to a certain degree.
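A minimal sketch of the dequeueing thread, using the System.Messaging API; the queue path, the SimEvent type and the UpdateMetrics/SaveToDatabase helpers are stand-ins for whatever the simulation actually defines.

```csharp
// A minimal sketch of a background loop draining MSMQ and updating metrics on the
// fly. Queue path, SimEvent and the helper methods are placeholders.
using System.Messaging;

var queue = new MessageQueue(@".\private$\simulationEvents");
queue.Formatter = new XmlMessageFormatter(new[] { typeof(SimEvent) });

while (running)                              // `running` is a flag owned by the host
{
    var message = queue.Receive();           // blocks until an event arrives
    var evt = (SimEvent)message.Body;

    UpdateMetrics(evt);                      // keep running aggregates in memory
    SaveToDatabase(evt);                     // optional: persist for later perusal
}
```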
OLAP (Online Analytical Processing)
Sounds like you might benefit from OLAP (cubes etc.). This will increase the overall runtime of your simulation but will improve the value of the data. Unfortunately this means forking out cash for one of the bigger SQL editions.
Stored Procedures
While Linq-to-SQL is great for 'your average developer', please keep away from it in scientific projects. There are a host of great tricks you can use in raw TSQL, in addition to being able to inspect the query plan. If you want the best possible performance, plan your DB carefully and create stored procedures/UDFs to aggregate your data.
If you can only calculate some of the metrics in C#, do as much work in SQL before-hand - and then feel free to use Linq-to-SQL to grab the data.
Also remember that if you are inserting into SQL off the back of MSMQ you can aggressively index, which will speed up your metric calculations without impacting your simulation.
I would only involve SQL if there is a real need for better memory utilization (i.e. you are actually running out of it).
Memory Mapped Files
This allows you to offset memory pressure onto disk; at a performance penalty if it needs to be 'paged' back in.
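A minimal sketch of that idea with the System.IO.MemoryMappedFiles API; SimEvent is assumed to be a small blittable struct, and the file name and capacity are arbitrary.

```csharp
// A minimal sketch of pushing fixed-size event structs into a memory-mapped file so
// they don't all live on the managed heap. SimEvent and the sizes are made up.
using System.IO;
using System.IO.MemoryMappedFiles;

struct SimEvent { public long Id; public long Timestamp; public double Value; }

long capacityBytes = 2L * 1024 * 1024 * 1024;  // 2 GB backing file
using (var mmf = MemoryMappedFile.CreateFromFile("events.dat", FileMode.Create,
                                                 "simEvents", capacityBytes))
using (var accessor = mmf.CreateViewAccessor())
{
    var evt = new SimEvent { Id = 1, Timestamp = 42L, Value = 3.14 };
    accessor.Write(0, ref evt);                // write one struct at offset 0

    accessor.Read(0, out SimEvent readBack);   // read it back later for metrics
}
```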
Overall
I would steer clear of Linq to define basic metrics - do it in SQL. MSMQ is without a doubt a huge winner in this case. Don't overcomplicate the memory issue and keep it in .Net if you are not running out of memory.
If you need to process all of the events, a C# List<> will be faster than Sql Server. An array (T[]) will have better performance still, especially if the elements are structs and not classes, since structs are stored inline in the array whereas class instances are only referenced from it. Having the structs within the array reduces garbage collection and increases cache locality (see the sketch after the list below).
If you only need to process part of the events, I think the solutions are in this order when it come to speed:
C# data structures, crafted especially for your needs.
Sql Server
Naive C# data structures, traversing a list searching for the right elements.
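To make the struct-array point concrete, a minimal sketch (the Event fields are invented) of events stored inline in one contiguous array and scanned sequentially:

```csharp
// A minimal sketch of the struct-array layout described above: events live inline in
// one contiguous block, so iteration is cache-friendly and creates no per-event
// GC objects. The fields are made-up examples.
struct Event
{
    public long Timestamp;
    public int Type;
    public double Value;
}

static double TotalValue(Event[] events)
{
    double total = 0;
    for (int i = 0; i < events.Length; i++)   // sequential scan over contiguous memory
        total += events[i].Value;
    return total;
}
```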
It sounds like you're thinking you need to have them in a database in order to use Linq. This isn't the case. You can use Linq with C#'s built-in structures.
It depends on what you mean by "faster to use". If this is about performance of data access, it's all about how much data you have; with big data, the DB solution, used purely for statistical purposes, is definitely a good choice.
As the DB for this kind of purpose I would suggest SQLite: it is a single-file, fully ACID-compliant DB that needs no service (like SQL Server Compact). But again, this depends on your data size, as SQLite has a lower data size limit than SQL Server.
Regards.
We need to optimise memory utilisation
Use Sql-Server-CE
the time it takes to process the events
Use Linq-To-Objects.
These two objectives are conflicting and you need to choose one that matters more to you.
I have almost 100,000 records in the database and I need to compare them to each other with the Longest Common Subsequence algorithm, and I need to do that with 1000 new records every day.
My application is written in C# .NET, and the problem is that this comparison runs slowly at the application level; comparing 1000 records takes more than 10 hours.
So does anyone know how much faster this will go if I write this algorithm as a stored procedure in SQL, or is there any other way?
You might want to try and write a stored proc in C# if you are using SQL server 2005 or 2008. This might scale better in the long run as you get more and more records and can't keep them all in memory.
Check out the MSDN Introduction to SQL Server CLR Integration.
This will use more CPU on your DB server, but you don't have to transfer data back and forth.
If you have 'just' 100,000 records, just collect them all when your app starts. Run your algorithm in memory, and store any results/alterations to the db when you finish.
It'll be much faster
I'm not sure TSQL will allow you the same flexibility as C# allows you, especially when you deal with complex algorithms like LCS. Store all needed records in memory and deal with them from there.
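For reference, a minimal sketch of the standard dynamic-programming LCS length computation done entirely in memory, as the answers above suggest; it uses two rolling rows, so for strings of lengths m and n it needs O(n) extra memory and O(m*n) time.

```csharp
// A minimal sketch of the classic LCS-length dynamic program over two strings,
// kept entirely in memory. Only the length is computed here, not the subsequence.
static int LcsLength(string a, string b)
{
    var previous = new int[b.Length + 1];
    var current = new int[b.Length + 1];

    for (int i = 1; i <= a.Length; i++)
    {
        for (int j = 1; j <= b.Length; j++)
        {
            current[j] = a[i - 1] == b[j - 1]
                ? previous[j - 1] + 1                       // characters match: extend
                : Math.Max(previous[j], current[j - 1]);    // otherwise take the best so far
        }
        var tmp = previous; previous = current; current = tmp;  // roll the rows
    }
    return previous[b.Length];
}
```

Even so, 1000 new records compared against 100,000 existing ones is a lot of pairwise work, so trimming what is actually compared (as suggested further down) matters at least as much as where the code runs.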
Now, the most important thing is to think outside the box for a minute and go for another approach: try to insert flags (a ranking) of some kind when a new item is inserted. No one can advise you here, since you haven't provided us with even a little data about what you are doing and what you are comparing. You can probably ease the process with some ranking computed during insertion of a new item. I don't mean running the full comparison as soon as a new item is added, but triggering an event, say every hour or so, that updates the table without user input.
It's true that a stored procedure works faster than LINQ or a view. That is the way to collect your data fast.
How do you determine that two of your records follow on from each other (i.e. that they're part of a sub-sequence)? Maybe you don't need to compare the whole 1MB of each record and could speed things up by only analysing some portion of that?
Sounds to me like your algorithm is flawed, or a DB might not be the best way of storing your data, if it's taking 2 seconds to compare each record.