SQLite and Multiple Users - C#

I have a C# Winforms application. Part of the application pulls the contents of a SQLite table and displays it in a DataGridView. I run into a problem when multiple users/computers are using the application.
When the program loads, it opens a single connection to the SQLite DB engine, which remains open until the user exits the program. On load, it refreshes the table in question and continues to do so at regular intervals. The table correctly updates when one user is using it or if that one user has more than one instance of the program open. If, however, more than one person uses it, the table doesn't seem to reflect changes made by other users until the program is closed and reopened.
An example - the first user (user A) logs in. The table has 5 entries. They add one to it; there are now 6 entries. User B now logs in and sees 6 entries. User A enters another record, for a total of 7. User B still sees 6 even after the automatic refresh, and won't see 7 until they close and reopen the program. User A sees 7 without any issue.
Any idea what could be causing this problem? It has to be something related to the SQLite DB engine, as I'm 100% sure my auto refresh is working properly. I suspect it has something to do with the write-ahead logging (WAL) feature or with connection pooling (which I have enabled). I disabled both to see what would happen, and the same issue occurs.

It could be a file lock issue - the first application instance may be taking an exclusive write lock, blocking out the other instances. If this is true, SQLite may simply be waiting until the lock is released before updating the data file, which is not ideal behaviour; but then again, using SQLite for multi-user applications is also not ideal.
I have found hints that SHARED locks can (or, in the most recent version, should) be used. This may be a simple solution, but documentation of it is not easy to find (probably because it bends the intended use of SQLite too far?).
Despite this, it may be better to serialize file access yourself. How best to approach that depends on your precise system architecture.
Your system architecture is not clear from your description. You speak of "multiple users/computers" accessing the SQLite file.
If the multiple-computer requirement is implemented by putting the SQLite file on a network share, then this is indeed going to be a growing problem. A better architecture or another RDBMS would be advisable.
If multiple computers access the data through a server process (or multiple server processes on the same machine?), then a simple Monitor lock (the lock keyword) or a ReaderWriterLock will help; in the case of multiple server processes, an OS mutex would be required.
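To make the multi-process case concrete, here's a rough sketch of serializing writes, assuming the server processes run on one machine; the class and mutex names are made up for illustration. A ReaderWriterLockSlim covers threads within one process, and a named OS Mutex covers several server processes:

using System;
using System.Threading;

public static class DbWriteGate
{
    // In-process: many readers, one writer at a time.
    private static readonly ReaderWriterLockSlim InProcessLock = new ReaderWriterLockSlim();

    // Cross-process: a named OS mutex shared by every server process on the machine.
    // The name is arbitrary, but must be the same in each process.
    private static readonly Mutex CrossProcessMutex = new Mutex(false, "MyAppSqliteWriteMutex");

    public static void WriteSerialized(Action writeToDatabase)
    {
        InProcessLock.EnterWriteLock();
        try
        {
            CrossProcessMutex.WaitOne();
            try
            {
                // e.g. open a connection, run the INSERT/UPDATE, commit
                writeToDatabase();
            }
            finally
            {
                CrossProcessMutex.ReleaseMutex();
            }
        }
        finally
        {
            InProcessLock.ExitWriteLock();
        }
    }
}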
Update
Given your original question, the above still holds true. However, given your situation and looking at your root problem - no access to your business's RDBMS - I have some other suggestions:
MariaDB / MySQL / PostgreSQL on your PC - of course this would require your PC to be kept on.
Some sort of database and/or service layer hosted in a datacentre (there are many options here, such as a VPS, Azure DB, shared hosting DB, etc., all of course incurring a cost [perhaps there are some small free ones out there])

SQLite across network file systems is not a great solution. You'll find that the FAQ and Appropriate Uses pages gently steer you away from using SQLite as a concurrently accessed database over a network file system.
While in theory it could work, the implementation and latency of network file systems dramatically increases the chance of locking conflicts occurring during write actions.
I should point out that reading the database only takes a shared read lock, which is fine for concurrent access.

Related

How to implement locking across a network

I have a desktop application. In this application there are many records that users can open and work on. If a user clicks on a record, the program will lock it so no one else can use it. If the record is already locked, the user may still view it, but read-only. Many users on our local network can open and work on records.
My first thought is to use the database to manage locks on records. But I am not sure how or if this is the best approach. Is there any programming patterns or ready made solutions I can use?
I've implemented a similar system for a WPF application accessing a database. I no longer have access to the source code, but I'll try to explain here. The route I took was somewhat different from using the database: with a duplex WCF service you can host a service somewhere (e.g. on the database server) to which the clients connect. Key things to understand:
You can make this service generic by having some kind of data-type identifier and by making sure each row type has the same kind of primary key (e.g. a long). In that case you could have a signature similar to bool AcquireLock(string dataType, long id), or replace the bool/long with bool[] and long[] if users frequently modify a larger number of rows (see the sketch at the end of this answer).
On the server side, you must be able to respond to this request quickly. Consider storing the locks in something along the lines of a Dictionary<string, Dictionary<User, HashSet<long>>>, where the outer string key is the data type.
When someone connects, they can receive a list of all locks for a given data type (e.g. when a screen opens that locks that type of record), while also registering to receive updates for that data type.
The socket connection between the client and the server defines that the user is 'connected'. If the socket closes, the server releases all locks for that user and immediately notifies the others that the user has lost their locks, making the records available for editing again. (This covers scenarios such as a user disconnecting or killing the process.)
To avoid concurrency issues, make sure a user has acquired the lock before allowing them to make any changes (e.g. on BeginEdit, check with the server first, by implementing IEditableObject on your view model).
When a lock is released, the client tells the server whether it made changes to the row, so that other clients can refresh the respective data. When the socket disconnects, assume no changes were made.
Nice feature to add: when providing users with a list / update of locks, also provide the user id, so that people can see who is working on what.
This form of 'real-time concurrency' provides a much better user experience than handling optimistic concurrency conflicts after the fact, and might also be technically easier to implement, depending on your scenario.
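For illustration only, here's a minimal sketch of what the contracts for such a duplex WCF lock service could look like; all of the names are hypothetical, and the server side would back them with the nested dictionary structure described above:

using System.Collections.Generic;
using System.ServiceModel;

// Callback contract: the server pushes lock changes to connected clients.
public interface ILockCallback
{
    [OperationContract(IsOneWay = true)]
    void LockAcquired(string dataType, long id, string userId);

    [OperationContract(IsOneWay = true)]
    void LockReleased(string dataType, long id, bool rowWasChanged);
}

[ServiceContract(CallbackContract = typeof(ILockCallback))]
public interface ILockService
{
    // Returns false if someone else already holds the lock.
    [OperationContract]
    bool AcquireLock(string dataType, long id);

    // madeChanges tells the server whether other clients should refresh the row.
    [OperationContract]
    void ReleaseLock(string dataType, long id, bool madeChanges);

    // Returns the ids currently locked for a data type and subscribes the
    // caller to further lock updates for that type.
    [OperationContract]
    IList<long> SubscribeAndGetLocks(string dataType);
}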

Cannot open the shared memory region error

I have a user reporting this error when they're using my application.
The application is a .NET Winforms application running on Windows XP Embedded, using SQL Server CE 3.5 SP1 and LINQ to SQL as the ORM. The database itself is located in a subdirectory my application creates in the My Documents folder. The user account is an administrator account on the system. There are no other applications or processes connecting to the database.
For the most part, the application seems to run fine. It starts up, can load data from and save data to the database. The user is using the application to access the database maybe a couple hundred times a day. They get this error, but only intermittently. Maybe 3-4 times a day.
In the code itself, all of the calls to the database use a LINQ to SQL data context wrapped in a using statement. In other words:
using (MyDataContext db = new MyDataContext(ConnectionString))
{
    // the real code passes the selection criteria as a lambda to Where()
    List<blah> someList = db.SomeTable.Where(x => /* selection criteria */ true).ToList();
    return someList;
}
That's what pretty much all of the calls to the database look like (with the exception that the ones that save data obviously aren't selecting and returning anything). As I mentioned before, there's no issue 99% of the time; they only get the shared memory error a few times a day.
My current "fix" is that on application startup I simply read all of the data out of the database (there's not a lot), cache it in memory, and convert my database calls to read from the in-memory lists. So far, this seems to have fixed the problem: for a day and a half now they've reported no problems. But this is still bugging me, because I don't know what caused the error in the first place.
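In rough outline, that workaround amounts to something like the sketch below; SomeEntity stands in for whatever the real row type is, and MyDataContext/SomeTable are the names from the code above:

using System;
using System.Collections.Generic;
using System.Linq;

public static class DataCache
{
    // Loaded once at application startup; reads are then served from memory.
    private static List<SomeEntity> _cache = new List<SomeEntity>();

    public static void Load(string connectionString)
    {
        using (MyDataContext db = new MyDataContext(connectionString))
        {
            _cache = db.SomeTable.ToList();
        }
    }

    public static List<SomeEntity> Find(Func<SomeEntity, bool> criteria)
    {
        return _cache.Where(criteria).ToList();
    }
}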
While the application is accessing the database a few hundred times a day, it's typically not in rapid-fire succession. It's usually once every few minutes at the least. However, there is one use-case where there might be two calls one right after the other, as fast as possible. In other words, something like:
// user makes a selection on the screen
DatabaseCall1();
DatabaseCall2();
Both of those follow the pattern in the code block above: they create a new context, do work, and return. But these calls aren't asynchronous, so I would expect the connection to be closed and disposed of before DatabaseCall2 is invoked. However, could it be that something on the SQL Server CE end isn't closing the connection fast enough? That might explain why it's intermittent, since most of the time it wouldn't be a problem. I should also mention that this exact program, without the fix, is installed on a few other systems with exactly the same hardware and software (they're clones of each other), and users of the other systems have not reported any errors.
I'm stuck scratching my head because I can't reproduce this error on my development machine or a test machine, and answers to questions about this exception here and other places typically revolve around insufficient user permissions or the database on a shared network folder.
Check this previous post, I think you will find your answer:
SQL Server CE - Internal error: Cannot open the shared memory region

SQLite & C#: How can I control the number of people editing a db file?

I'm writing a simple customer-information management program with SQLite.
One exe file, one db file, some dll files. - That's it :)
2-4 people may run this exe file simultaneously and access the database.
Not only reading, but frequent editing will be done by them too.
Yeahhh, now here comes one of the most famous problems... "Synchronization"
I was trying to create / remove a temporary empty file whenever someone is trying to edit the database (the file acts as a 'key' to access the db).
But there must be a better way for it : (
What would be the best way of preventing this problem?
Well, SQLite already locks the database file for each use, the idea being that multiple applications can share the same database.
However, the documentation for SQLite explicitly warns about using this over the network:
SQLite will work over a network filesystem, but because of the latency associated with most network filesystems, performance will not be great. Also, the file locking logic of many network filesystem implementations contains bugs (on both Unix and Windows). If file locking does not work like it should, it might be possible for two or more client programs to modify the same part of the same database at the same time, resulting in database corruption. Because this problem results from bugs in the underlying filesystem implementation, there is nothing SQLite can do to prevent it.

A good rule of thumb is that you should avoid using SQLite in situations where the same database will be accessed simultaneously from many computers over a network filesystem.
So assuming your "2-4 people" are on different computers, using a network file share, I'd recommend that you don't use SQLite. Use a traditional client/server RDBMS instead, which is designed for multiple concurrent connections from multiple hosts.
Your app will still need to consider concurrency issues (unless it speculatively acquires locks on whatever the user is currently looking at, which is generally a nasty idea) but at least you won't have to deal with network file system locking issues as well.
You are looking at one of the classic problems in dealing with multiple users accessing a database: the Lost Update.
See this tutorial on concurrency:
http://www.brainbell.com/tutors/php/php_mysql/Transactions_and_Concurrency.html
At least you won't have to worry about the db file itself getting corrupted by this, because SQLite locks the whole file when it's being written. That said, SQLite doesn't recommend using it if you expect your app to be accessed simultaneously by multiple clients.
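If you do stay on SQLite for light concurrency on a local disk, one practical point: a writer that runs into the lock can fail with a "database is locked" error unless a busy timeout is set, in which case it retries for a while instead. A rough sketch with System.Data.SQLite (the file, table, and column names are made up):

using System.Data.SQLite;

public static class SqliteWriteExample
{
    public static void UpdatePhone(long customerId, string phone)
    {
        using (var conn = new SQLiteConnection("Data Source=customers.db"))
        {
            conn.Open();

            // Retry for up to 5 seconds when another process holds the write lock,
            // instead of failing immediately.
            using (var pragma = new SQLiteCommand("PRAGMA busy_timeout = 5000;", conn))
            {
                pragma.ExecuteNonQuery();
            }

            using (var cmd = new SQLiteCommand(
                "UPDATE Customers SET Phone = @phone WHERE Id = @id;", conn))
            {
                cmd.Parameters.AddWithValue("@phone", phone);
                cmd.Parameters.AddWithValue("@id", customerId);
                cmd.ExecuteNonQuery();
            }
        }
    }
}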

MongoDB in desktop application

Is it a good idea to use MongoDB in .NET desktop application?
Mongo is meant to be run on a server with replication. It isn't really intended as a database for desktop applications (unless they're connecting to a database on a central server). There's a blog post on durability on the MongoDB blog; it's a common question.
When a write occurs and the write command returns, we can not be 100% sure that from that moment in time on, all other processes will see the updated data only.
In every driver there should be an option to do a "safe" insert or update, which waits for a database response. I don't know which driver you're planning on using (there are a few for .NET; http://github.com/samus/mongodb-csharp is the most officially supported), but if the driver doesn't offer a safe option, you can run the getLastError command to synchronize things manually.
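As a sketch of the "safe write" idea using the current official MongoDB .NET driver (the MongoDB.Driver package, which post-dates the samus driver linked above; the host, database, and collection names are placeholders):

using MongoDB.Bson;
using MongoDB.Driver;

public static class SafeInsertExample
{
    public static void Insert()
    {
        var client = new MongoClient("mongodb://localhost:27017");
        var db = client.GetDatabase("mydesktopapp");

        // WriteConcern.Acknowledged makes the insert wait for the server's response,
        // which is the equivalent of the "safe" insert / getLastError check described above.
        var collection = db.GetCollection<BsonDocument>("customers")
                           .WithWriteConcern(WriteConcern.Acknowledged);

        collection.InsertOne(new BsonDocument { { "name", "Example" } });
    }
}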
MongoDB won’t make sure your data is on the hard drive immediately. As a result, you can lose data that you thought was already written if your server goes down in the period between writing and actual storing to the hard drive.
There is an fsync command, which you can run after every operation if you really want. Again, Mongo goes with the "safety in numbers" philosophy and encourages anyone running in production to have at least one slave for backup.
It depends on what you want to store in a database.
According to Wikipedia:
MongoDB is designed for problems without heavy transactional requirements that aren't easily solved by traditional RDBMSs, including problems which require the database to span many servers.
There is a .NET driver available. And here is some information to help you get started.
But you should first ask yourself what you want to store and what the further requirements are (support for stored procedures, triggers, expected size, etc.).

What is the most cost-effective way to break up a centralised database?

Following on from this question...
What to do when you’ve really screwed up the design of a distributed system?
... the client has reluctantly asked me to quote for option 3 (the expensive one), so they can compare prices to a company in India.
So, they want me to quote (hmm). In order to make this as accurate as possible, I need to decide how I'm actually going to do it. Here are three scenarios...
Scenarios
Split the database
My original idea (perhaps the trickiest) will yield the best speed on both the website and the desktop application. However, it may require some synchronising between the two databases, as the two "systems" are so heavily connected. If not done properly and not tested thoroughly, I've learnt that synchronisation can be hell on earth.
Implement caching on the smallest system
To side-step the sync option (which I'm not fond of), I figured it may be more productive (and cheaper) to move the entire central database and web service to their office (i.e. in-house), and have the website (still on the hosted server) download data from the central office and store it in a small database (acting as a cache)...
Set up a new server in the customer's office (in-house).
Move the central database and web service to the new in-house server.
Keep the web site on the hosted server, but alter the web service URL so that it points to the office server.
Implement a simple cache system for images and most frequently accessed data (such as product information).
... the down-side is that when the end-user in the office updates something, their customers will effectively be downloading the data over a 60KB/s upload connection (albeit only once, as it will then be cached).
Also, not all data can be cached; for example, when a customer updates their order. Connection redundancy also becomes a huge factor here: what if the office connection is offline? There's nothing to do but show an error message to the customers, which is nasty, but a necessary evil.
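To give a feel for the caching idea, here's a rough cache-aside sketch; every name and timing in it is illustrative only. It keeps a local copy with an expiry, calls the office web service only on a miss, and falls back to the stale copy if the office connection is down:

using System;
using System.Collections.Concurrent;

public class ProductInfo
{
    public int Id;
    public string Name;
}

public class CachedProductStore
{
    private class Entry
    {
        public ProductInfo Value;
        public DateTime FetchedAtUtc;
    }

    private readonly ConcurrentDictionary<int, Entry> _cache = new ConcurrentDictionary<int, Entry>();
    private readonly TimeSpan _maxAge = TimeSpan.FromMinutes(15);
    private readonly Func<int, ProductInfo> _fetchFromOfficeService; // wraps the web service call

    public CachedProductStore(Func<int, ProductInfo> fetchFromOfficeService)
    {
        _fetchFromOfficeService = fetchFromOfficeService;
    }

    public ProductInfo Get(int productId)
    {
        Entry entry;
        bool haveEntry = _cache.TryGetValue(productId, out entry);

        if (haveEntry && DateTime.UtcNow - entry.FetchedAtUtc < _maxAge)
            return entry.Value; // fresh enough, no call over the slow office link

        try
        {
            var fresh = _fetchFromOfficeService(productId);
            _cache[productId] = new Entry { Value = fresh, FetchedAtUtc = DateTime.UtcNow };
            return fresh;
        }
        catch (Exception)
        {
            // Office connection is down: serve the stale copy if we have one.
            if (haveEntry)
                return entry.Value;
            throw;
        }
    }
}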
Mystery option number 3
Suggestions welcome!
SQL replication
I had considered MSSQL replication, but I have no experience with it, so I'm worried about how conflicts are handled, and so on. Is this an option, considering there are physical files involved, etc.? Also, I believe we'd need to upgrade from SQL Express to a paid SQL Server edition and buy two licences.
Technical
Components
ASP.Net website
ASP.net web service
.Net desktop application
MSSQL 2008 express database
Connections
Office connection: 8 mbit down and 1 mbit up contended line (50:1)
Hosted virtual server: Windows 2008 with 10 megabit line
Having just read your original question related to this for the first time, I'd say that you may already have laid the foundation for resolving the problem, simply because you are communicating with the database via a web service.
This web service may well be the saving grace as it allows you to split the communications without affecting the client.
A good while back I was involved in designing just such a system.
The first thing we identified was the data which rarely changes - and we immediately locked all of it out of consideration for distribution. A manual process administered through the web server was the only way to change this data.
The second thing we identified was the data that should be owned locally. By this I mean data that only one person or location at a time would need to update, but that may need to be viewed at other locations. We fixed all of the keys on the related tables to ensure that duplication could never occur and that no auto-incrementing fields were used.
The third category was the tables that were truly shared - and although we worried a lot about these during stages 1 & 2, in our case this part was straightforward.
When I'm talking about a server here I mean a DB Server with a set of web services that communicate between themselves.
As designed, our architecture had one designated 'master' server. This was the definitive source for resolving conflicts.
The rest of the servers were, in the first instance, a large cache of anything covered by item 1. In fact it wasn't so much a cache as a database duplicate, but you get the idea.
The second function of each non-master server was to coordinate changes with the master. This involved a very simple process of passing most of the work through transparently to the master server.
We spent a lot of time designing and optimising all of the above, only to discover that the single best performance improvement came from simply compressing the web service requests to reduce bandwidth (though it was over a single-channel ISDN line, which probably made the most difference).
The fact is that having a web service in place gives you greater flexibility in how you implement this.
I'd probably start by investigating the feasibility of implementing one of the SQL Server replication methods.
Usual disclaimers apply:
Splitting the database will not help a lot, but it'll add a lot of headaches. IMO, you should first try to optimize the database: update some indexes or maybe add a few more, optimize some queries, and so on. For database performance tuning I recommend reading some articles from simple-talk.com.
Also, in order to save bandwidth, you can add bulk processing to your Windows client and add compression (zipping) to your web service.
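For the zipping part, the sketch below is just generic GZip over a serialized payload, not tied to any particular web service stack; you'd compress the message body before sending it over the slow office uplink and decompress on the other side:

using System.IO;
using System.IO.Compression;

public static class PayloadCompression
{
    // Compress a serialized message (e.g. the XML/JSON body of a web service request).
    public static byte[] Compress(byte[] payload)
    {
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(payload, 0, payload.Length);
            }
            return output.ToArray();
        }
    }

    public static byte[] Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            gzip.CopyTo(output);
            return output.ToArray();
        }
    }
}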
And you should probably upgrade to MS SQL 2008 Express; it's also free.
It's hard to recommend a good solution for your problem using the information I have; it's not clear where the bottleneck is. I strongly recommend profiling your application to find the exact location of the bottleneck (e.g. is it in the database, or in the fully used-up connection, and so on) and adding a description of it to the question.
EDIT 01/03:
When the bottleneck is the upload connection, you can only do the following:
1. Add compression (archiving) of messages to the service and client
2. Implement bulk operations and use them
3. Try to reduce the number of operations per use case for the most frequent cases
4. Add a local database for the Windows clients, perform all operations against it, and synchronize the local db with the main one on a timer.
And SQL replication will not help you a lot in this case. The fastest and cheapest solution is to increase the upload bandwidth, because all the other options (except the first one) will take a lot of time.
If you choose to rewrite the service to support bulking, I recommend having a look at the Agatha project.
Actually, hearing how many users they have on that one connection, it may be time to increase the bandwidth at the office (not at all my normal response). If you factor out the CRM system, what else is a top consumer of the bandwidth? It may be that they have simply reached the point of needing more bandwidth, period.
But I am still curious to see how much of the information you are passing is actually getting used. Make sure you are transferring data efficiently; if there's any chance, add some quick and easy measurements to see how much data people are actually consuming when looking at it.
