I have made a SQLite database (~700 MB, 3 tables, 3 indexes: 1 R-tree index and 2 primary keys). I have marked it as a read-only file (on Windows).
Is it safe and performant to execute just SELECT commands on this database from multiple threads?
If so, how can it be made more performant (any options or flags to enable, any small tunings)?
The application is in C#, using System.Data.SQLite (1.0.82.0), compiled for .NET 4.0 on an x64 machine. It works fine, though I can't prove that it is performant or correctly parallelized. Currently I have no real bottleneck, but I will soon: I need to search the R-tree as fast as possible, and on my machine (4 GB RAM, 2 cores) a single R-tree search sometimes takes more than 5 milliseconds. I have made that part multithreaded so my data is processed in parallel. Given the structure of the R-tree (an R*-tree in SQLite's case, I believe), growing the database to a few GB should not be a problem, because these trees have low depth and perform well on large datasets. Still, if any improvements are possible, they should be considered in this application.
I cannot be sure that the part I parallelized is really running in parallel, or whether SQLite (or System.Data.SQLite) has an internal lock. In fact, in some tests the parallel version runs slower!
This should be safe, provided each thread has its own connection or you use locks to prevent multiple threads from using the same connection at the same time.
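As an illustration only, here is a minimal sketch of that connection-per-thread pattern with System.Data.SQLite. The connection string, table, and column names (items_rtree, minX, maxX) are placeholders, not taken from your schema:

    using System.Data.SQLite;
    using System.Threading.Tasks;

    // Sketch: each parallel iteration opens its own read-only connection,
    // runs one R-tree range query, and processes the matching row ids.
    static void SearchAll(double[] xs)
    {
        Parallel.ForEach(xs, x =>
        {
            using (var conn = new SQLiteConnection(
                "Data Source=mydb.sqlite;Version=3;Read Only=True;"))
            {
                conn.Open();
                using (var cmd = new SQLiteCommand(
                    "SELECT id FROM items_rtree WHERE minX <= @x AND maxX >= @x", conn))
                {
                    cmd.Parameters.AddWithValue("@x", x);
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            long id = reader.GetInt64(0);
                            // ... process the matching row id ...
                        }
                    }
                }
            }
        });
    }

If opening a connection per iteration turns out to be measurable, adding Pooling=True to the connection string should let closed connections be reused instead of re-opened.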
Is it safe and performant to execute just SELECT commands on this database from multiple threads?
Most likely
how can it be made more performant (if it is possible)?
What are your bottlenecks? Disk I/O? Processor? Memory?
Making an application more performant is best done by 1) identifying the pieces that are performing poorly (and can be improved) and 2) making those pieces more performant. There are a multitude of tools out there that will identify the slowest parts of your code so you know what to tackle first. It makes no sense to shave 10ms off of a query when the program takes the results of that query and spends 10 seconds writing it to disk.
There's not a "magic wand" that you can wave over an application (especially a database-driven application) and make it run faster. You need to know what to fix first.
You can set the threading support level: http://www.sqlite.org/threadsafe.html
SQLite supports three different threading modes:
Single-thread. In this mode, all mutexes are disabled and SQLite is unsafe to use in more than a single thread at once.
Multi-thread. In this mode, SQLite can be safely used by multiple threads provided that no single database connection is used simultaneously in two or more threads.
Serialized. In serialized mode, SQLite can be safely used by multiple threads with no restriction.
The threading mode can be selected at compile-time (when the SQLite library is being compiled from source code) or at start-time (when the application that intends to use SQLite is initializing) or at run-time (when a new SQLite database connection is being created). Generally speaking, run-time overrides start-time and start-time overrides compile-time. Except, single-thread mode cannot be overridden once selected.
The default mode is serialized.
The slowdown you are seeing is the serialization of requests. Change the threading mode and things will speed up. Keep in mind that "unsafe" probably means both readers and writers at the same time; I am not sure which mode is best when there are only readers.
Related
I am currently benchmarking two databases, Postgres and MongoDB, on a relatively large data set with equivalent queries. Of course, I am doing my best to put them on equal ground, but I have one dilemma. For Postgres I take the execution time reported by EXPLAIN ANALYZE, and there is a similar concept with MongoDB, using profiling (although not strictly equivalent, it reports millis).
However, different times are observed when the queries are executed from, let's say, PgAdmin, the mongo CLI client, or my own C# application under test. That time also includes the transfer latency, and probably protocol differences. PgAdmin, for example, seems to completely distort the execution time (it obviously includes the result-rendering time).
The question is: is there any sense in actually measuring the time on the "receiving end", since an application actually does consume that data? Or does it just include too many variables and does not contribute anything to the actual database performance, and I should stick to the reported DBMS execution times?
The question you'd have to answer is why are you benchmarking the databases? If you are benchmarking so you can select one over the other, for use in a C# application, then you need to measure the time "on the 'receiving end'". Whatever variables that may contain, that is what you need to compare.
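For what it's worth, a rough sketch of that "receiving end" measurement in C#: it simply wraps the query execution plus full materialization of the result set in a Stopwatch. The connection object and SQL string are whatever your benchmark already uses:

    using System;
    using System.Data;
    using System.Diagnostics;

    static TimeSpan TimeQuery(IDbConnection conn, string sql)
    {
        var sw = Stopwatch.StartNew();
        using (var cmd = conn.CreateCommand())
        {
            cmd.CommandText = sql;
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Touch every column so the rows are actually pulled over
                    // the wire, not just the first buffered batch.
                    for (int i = 0; i < reader.FieldCount; i++)
                    {
                        object v = reader.GetValue(i);
                    }
                }
            }
        }
        sw.Stop();
        return sw.Elapsed;
    }

Run it many times per query and compare distributions rather than single numbers, since network and client-side effects add noise.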
I'm trying to improve upon this program that I wrote for work. Initially I was rushed, and they don't care about performance or anything. So I made a horrible decision: query an entire database (a SQLite database) and then store the results in lists for use in my functions. However, I'm now considering having each of my functions threaded, with each function querying only the parts of the database that it needs. There are ~25 functions. My question is: is this safe to do? Also, is it possible to have that many concurrent connections? I will only be PULLING information from the database, never inserting or updating.
The way I've had it described to me[*] is to have each concurrent thread open its own connection to the database, as each connection can only process one query or modification at a time. The group of threads with their connections can then perform concurrent reads easily. If you've got a significant problem with many concurrent writes causing excessive blocking or failure to acquire locks, you're getting to the point where you're exceeding what SQLite does for you (and should consider a server-based DB like PostgreSQL).
Note that you can also have a master thread open the connections for the worker threads if that's more convenient, but it's advised (for your sanity's sake if nothing else!) to only actually use each connection from one thread.
[* For a normal build of SQLite. It's possible to switch things off at build time, of course.]
SQLite has no write concurrency, but it supports arbitrarily many connections that read at the same time.
Just ensure that every thread has its own connection.
25 simultaneous connections is not a smart idea; that's a huge number.
I usually create a multi-layered design for this problem. I send all requests to the database through a kind of ObjectFactory class that has an internal cache. The ObjectFactory will forward the request to a ConnectionPoolHandler and will store the results in its cache. This connection pool handler uses X simultaneous connections but dispatches them to several threads.
However, some remarks must be made before applying this design. You first have to ask yourself the following 2 questions:
Is your application the only application that has access to this database?
Is your application the only application that modifies data in this database?
If the first question is answered negatively, you could encounter locking issues. If the second question is answered negatively, it will be extremely difficult to apply caching; you may even prefer not to implement any caching at all.
Caching is especially interesting when you are often requesting objects based on a unique reference, such as the primary key. In that case you can store the most often used objects in a map. A popular collection for caching is an "LRUMap" ("least-recently-used" map). The benefit of this collection is that it automatically keeps the most often used objects at the top. At the same time it has a maximum size and automatically removes items that are rarely used.
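As an illustration, a bare-bones LRU map in C# can be built from a Dictionary plus a LinkedList. This is only a sketch (not thread-safe, names invented), not a drop-in component:

    using System.Collections.Generic;

    // Minimal LRU cache sketch. Wrap access in a lock if the ObjectFactory
    // is shared between threads.
    public class LruCache<TKey, TValue>
    {
        private readonly int _capacity;
        private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> _map =
            new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>();
        private readonly LinkedList<KeyValuePair<TKey, TValue>> _order =
            new LinkedList<KeyValuePair<TKey, TValue>>();

        public LruCache(int capacity) { _capacity = capacity; }

        public bool TryGet(TKey key, out TValue value)
        {
            LinkedListNode<KeyValuePair<TKey, TValue>> node;
            if (_map.TryGetValue(key, out node))
            {
                // Move the hit to the front: it is now the most recently used.
                _order.Remove(node);
                _order.AddFirst(node);
                value = node.Value.Value;
                return true;
            }
            value = default(TValue);
            return false;
        }

        public void Put(TKey key, TValue value)
        {
            LinkedListNode<KeyValuePair<TKey, TValue>> existing;
            if (_map.TryGetValue(key, out existing))
            {
                _order.Remove(existing);
                _map.Remove(key);
            }
            else if (_map.Count >= _capacity)
            {
                // Evict the least recently used entry from the tail.
                var last = _order.Last;
                _order.RemoveLast();
                _map.Remove(last.Value.Key);
            }
            _map[key] = _order.AddFirst(new KeyValuePair<TKey, TValue>(key, value));
        }
    }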
A second advantage of caching is that each object exists only once. For example:
An Employee is fetched from the database.
The ObjectFactory converts the resultset to an actual object instance.
The ObjectFactory immediately stores it in cache.
A bit later, a bunch of employees are fetched using an SQL "... where name like 'John%'" statement.
Before converting the resultset to objects, the ObjectFactory first checks whether the IDs of these records are already stored in the cache.
Found a match! Aha, this object does not need to be recreated.
There are several advantages to having a certain object only once in memory.
Last but not least, in Java there is something called a "weak reference": a reference that the garbage collector is still allowed to clean up. C# has the same concept in the WeakReference class. By using it, you don't even have to care about the maximum number of cached objects; the garbage collector will take care of it.
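For reference, a cache built on WeakReference (the non-generic class, which is available on .NET 4.0) could be sketched like this; the names are invented for illustration:

    using System;
    using System.Collections.Generic;

    // Sketch: the cache holds weak references, so the garbage collector is
    // free to reclaim cached objects; lookups simply miss afterwards.
    public class WeakCache<TKey, TValue> where TValue : class
    {
        private readonly Dictionary<TKey, WeakReference> _map =
            new Dictionary<TKey, WeakReference>();

        public void Put(TKey key, TValue value)
        {
            _map[key] = new WeakReference(value);
        }

        public bool TryGet(TKey key, out TValue value)
        {
            WeakReference wr;
            if (_map.TryGetValue(key, out wr))
            {
                value = wr.Target as TValue;   // null if already collected
                if (value != null) return true;
                _map.Remove(key);              // drop the dead entry
            }
            value = null;
            return false;
        }
    }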
Question:
Is there a way to force the Task Parallel Library to run multiple tasks simultaneously? Even if it means making the whole process run slower with all the added context switching on each core?
Background:
I'm fairly new to multithreading, so I could use some assistance. My initial research hasn't turned up much, but I also doubt I know what exactly to search for. Perhaps someone more experienced with multithreading can help me better understand TPL and/or find a better solution.
Our company is planning on deploying a piece of software to all users' machines that will connect to a central server a few times a day, and synchronize some files and MS Access data back to the user's machine. We would like to load-test this concept first and see how the Access DB holds up to lots of simultaneous connections.
I've been tasked with writing a .NET application that behaves like the client app (connecting & syncing with a network location), but does this on multiple threads simultaneously.
I've been getting familiar with the Task Parallel Library (TPL), as this seems like the best (newest) way to handle multithreading and to get return values back from each thread easily. However, as I understand it, the TPL decides how to run each "task" for the fastest execution possible, splitting the work among the available cores. So let's say I want to run 30 sync jobs on a 2-core machine... the TPL would run 15 on each core, sequentially. This would mean my load test would only be hitting the Access DB with at most 2 connections at the same time. I want to hit the database with lots of simultaneous connections.
You can force the TPL to do this by specifying TaskCreationOptions.LongRunning. According to Reflector (not according to the docs, though), this always creates a new thread. I consider relying on this safe for production use.
Normal tasks will not do, because they don't guarantee concurrent execution. Setting ThreadPool.SetMinThreads is a horrible solution (for production) because you are changing a process-global setting to solve a local problem, and you are still not guaranteed success.
Of course, you can also start threads. Tasks are more convenient though because of error handling. Nothing wrong with using threads for this use case.
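As a rough sketch of the LongRunning approach for the 30 sync jobs; SyncOneClient and SyncResult stand in for whatever your real sync routine and its result type are:

    using System.Threading.Tasks;

    static void RunLoadTest()
    {
        var tasks = new Task<SyncResult>[30];
        for (int i = 0; i < tasks.Length; i++)
        {
            int clientId = i;                          // capture a copy for the closure
            tasks[i] = Task.Factory.StartNew(
                () => SyncOneClient(clientId),         // placeholder: one simulated client sync
                TaskCreationOptions.LongRunning);      // hint: give this task its own dedicated thread
        }
        Task.WaitAll(tasks);                           // all 30 jobs run concurrently
    }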
Based on your comment, I think you should reconsider using Access in the first place. It doesn't scale well and has problems once the database grows to a certain size. Especially if this is simply served off some file share on your network.
You can try and simulate load from your single machine but I don't think that would be very representative of what you are trying to accomplish.
Have you considered using SQL Server Express? It's basically a de-tuned version of the full-blown SQL Server which might suit your needs better.
So I am troubleshooting some performance problems on a legacy application, and I have uncovered a pretty specific problem (there may be others).
Essentially, the application is using an object relational mapper to fetch data, but it is doing so in a very inefficient/incorrect way. In effect, it is performing a series of entity graph fetches to fill a datagrid in the UI, and on databinding the grid (it is ASP.Net Webforms) it is doing additional fetches, which lead to other fetches, etc.
The net effect of this is that many, many tiny queries are being performed. SQL Profiler shows that a certain page performs over 10,000 queries to fill a single grid. No query takes over 10ms to complete, and most of them register as 0ms in Profiler. Each query uses and releases one connection, and the series of queries is single-threaded (per HTTP request).
I am very familiar with the ORM, and know exactly how to fix the problem.
My question is: what is the exact effect of having many, many small queries being executed in an application? In what ways does it/can it stress the different components of the system?
For example, what is the effect on the webserver's CPU and memory? Would it flood the connection pool and cause blocking? What would be the impact on the database server's memory, CPU and I/O?
I am looking for relatively general answers, mainly because I want to start monitoring the areas that are likely to be the most affected (I need to measure => fix => re-measure). Concurrent use of the system at peak would likely be around 100-200 users.
It will depend on the database, but generally there is a parse phase for each query. If the query uses bind variables, it will probably be cached; if not, you wear the hit of a parse, and that often means short locks on resources - i.e., bad. In Oracle, CPU use and blocking are much more prevalent at the parse stage than at the execute stage; SQL Server less so, but it's worse at the execute. Obviously doing 10K of anything over a network is going to be a terrible solution, especially x 200 users. The volume I'm sure is fine, but that frequency will really highlight all the overhead in comms latency and the like. Connection pools are generally in the hundreds, not tens of thousands, and now you have tens of thousands of objects all being created, queued, managed, destroyed, garbage collected, etc.
But I'm sure you already know all this deep down. Ditch the ORM for this part and write a stored procedure to execute the single query to return your result set. Then put it on the grid.
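To make that concrete, a rough sketch of the "one query, one round trip" idea, assuming SQL Server and a hypothetical stored procedure dbo.GetGridRows (the name and parameter are invented for illustration):

    using System.Data;
    using System.Data.SqlClient;

    static DataTable FetchGridRows(string connectionString, int pageId)
    {
        var table = new DataTable();
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.GetGridRows", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@PageId", pageId);
            using (var adapter = new SqlDataAdapter(cmd))
            {
                adapter.Fill(table);   // one round trip instead of thousands
            }
        }
        return table;
    }

In the WebForms code-behind that becomes grid.DataSource = FetchGridRows(cs, id); grid.DataBind(); with no per-row fetches during databinding.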
I have been given the task of re-writing some libraries written in C# so that there are no allocations once startup is completed.
I just got to one project that does some DB queries over an OdbcConnection every 30 seconds. I've always just used .ExecuteReader() which creates an OdbcDataReader. Is there any pattern (like the SocketAsyncEventArgs socket pattern) that lets you re-use your own OdbcDataReader? Or some other clever way to avoid allocations?
I haven't bothered to learn LINQ since all the dbs at work are Oracle based and the last I checked, there was no official Linq To Oracle provider. But if there's a way to do this in Linq, I could use one of the third-party ones.
Update:
I don't think I clearly specified the reasons for the no-alloc requirement. We have one critical thread running, and it is very important that it never freezes. This is a near-realtime trading application, and we do see freezes of up to 100 ms for some Gen 2 collections. (I've also heard of games being written the same way in C#.) There is one background thread that does some compliance checking and runs every 30 seconds; right now it does a DB query. The query is quite slow (approx. 500 ms to return all the data), but that is okay because it doesn't interfere with the critical thread. Except that if the worker thread allocates memory, it will cause GCs, which freeze all threads.
I've been told that all the libraries (including this one) cannot allocate memory after startup. Whether I agree with that or not, that's the requirement from the people who sign the checks :).
Now, clearly there are ways that I could get the data into this process without allocations. I could set up another process and connect it to this one using a socket. The .NET 3.5 sockets were specifically optimized not to allocate at all, using the new SocketAsyncEventArgs pattern. (In fact, we are using them to connect to several systems and never see any GCs from them.) I would then read from the socket into a pre-allocated byte array and walk through the data without allocating any strings along the way. (I'm not familiar with other forms of IPC in .NET, so I'm not sure whether memory-mapped files and named pipes allocate or not.)
But if there's a faster way to get this no-alloc query done without going through all that hassle, I'd prefer it.
You cannot reuse IDataReader (or OdbcDataReader or SqlDataReader or any equivalent class). They are designed to be used with a single query only. These objects encapsulate a single record set, so once you've obtained and iterated it, it has no meaning anymore.
Creating a data reader is an incredibly cheap operation anyway, vanishingly small in contrast to the cost of actually executing the query. I cannot see a logical reason for this "no allocations" requirement.
I'd go so far as to say that it's very nearly impossible to rewrite a library so as to allocate no memory. Even something as simple as boxing an integer or using a string variable is going to allocate some memory. Even if it were somehow possible to reuse the reader (which it isn't, as I explained), it would still have to issue the query to the database again, which would require memory allocations in the form of preparing the query, sending it over the network, retrieving the results again, etc.
Avoiding memory allocations is simply not a practical goal. Better to perhaps avoid specific types of memory allocations if and when you determine that some specific operation is using up too much memory.
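To illustrate the point: the most you can realistically reuse is the connection, the prepared command, and its parameters; ExecuteReader still hands back a new OdbcDataReader on every poll. A rough sketch, with the SQL, parameter, and cancellation handling as placeholders:

    using System;
    using System.Data.Odbc;
    using System.Threading;

    static void PollPositions(string connectionString, CancellationToken token)
    {
        using (var conn = new OdbcConnection(connectionString))
        {
            conn.Open();
            using (var cmd = new OdbcCommand(
                "SELECT id, qty FROM positions WHERE book = ?", conn))   // placeholder SQL
            {
                cmd.Parameters.Add("book", OdbcType.VarChar, 16).Value = "FX";
                cmd.Prepare();                                 // prepared once, reused every poll

                while (!token.IsCancellationRequested)
                {
                    using (var reader = cmd.ExecuteReader())   // a new reader object every call
                    {
                        while (reader.Read())
                        {
                            long id = reader.GetInt64(0);
                            decimal qty = reader.GetDecimal(1);
                            // ... compliance check, avoiding string allocations ...
                        }
                    }
                    Thread.Sleep(TimeSpan.FromSeconds(30));
                }
            }
        }
    }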
For such a requirement, are you sure that a high-level language like C# is your choice?
You cannot say whether the .NET library functions you are using are internally allocating memory or not. The standard doesn't guarantee that, so if they are not using allocations in the current version of .NET framework, they may start doing so later.
I suggest you profile the application to determine where the time and/or memory are being spent. Don't guess - you will only guess wrong.