I have a WCF application that currently uses XML-based file storage to store data that gets used to generate reports. Besides this, processing decisions are made based on information stored in these XML files.
I'm now hitting volumes of around 30 000 text files. This is incredibly taxing, and the application at times comes to a grinding halt.
I've always wanted to swap out the XML DAL in favor of an RDBMS, but project managers simply won't allow it. They would, however, be willing to look at a serverless solution, for example SQLite. I am really tempted to just dive right in and start using it as a replacement DAL (Data Access Layer).
I would need no more than around 20 tables in the whole solution, and I would expect no more than around 20,000 - 100,000 transactions a day; however, this is the extreme, and the real volumes would be less than this in most cases.
Update
I am not expecting a great deal of simultaneous connections. When I say transactions, I essentially mean 1 or 2 clients that make calls and execute against the database in order. At times there might be external clients making quick calls to the DB, but the bulk of DB connections will be made by my WCF service, which is a back-end scheduled task, not something serving hundreds of people across an organization.
Another good point is that I only need to retain data for 90 days, so the DB shouldn't grow too big.
My main concerns are:
How reliable is SQLite? If the DB file gets corrupted, will I lose all processing data? How easy is the DB to back up? Will it handle my volumes? And lastly, how well does the .NET provider work (located here: http://sourceforge.net/projects/sqlite-dotnet2/)?
If you have any experience with SQLite, please post your experiences so I can make an informed decision on whether or not to switch.
Thanks in advance...
SQLite is as reliable as your OS and hardware.
Its transaction rate is similar to SQL Server's, and it is often faster because it's all in-process.
The .NET ADO provider works great.
To back up the DB, stop the service and copy the file. If the journal file is present, copy it too.
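For illustration, a minimal sketch of that backup step in C# (the paths are placeholders, and this assumes the service has already been stopped so no transaction is in flight):

using System.IO;

class SqliteBackup
{
    static void Main()
    {
        const string db = @"C:\data\app.db";       // hypothetical DB path
        const string backupDir = @"C:\backups";
        string journal = db + "-journal";          // rollback journal, if present

        Directory.CreateDirectory(backupDir);
        File.Copy(db, Path.Combine(backupDir, Path.GetFileName(db)), true);

        // If a journal file exists, copy it too so the pair stays consistent.
        if (File.Exists(journal))
            File.Copy(journal, Path.Combine(backupDir, Path.GetFileName(journal)), true);
    }
}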
EDIT: SQLite uses UTF-8 by default, so with the ADO.NET provider you should be able to avoid losing accents (as long as you follow the usual rules for XML in strings).
You could consider Microsoft's Sql Compact Edition.
It's like SQLite in terms of being a single-file embedded database, but it has better integration with the .NET Framework :)
SQLite seems reliable, but even with Microsoft's offering, don't expect to receive much support if the database becomes corrupted.
Given your transaction volume, I'd say the fact that the DB itself is a single monolithic file, with only file-system locking available, could be a problem.
There is no row-based locking, as far as I know.
I used SQLite with the .NET provider without problems in a single-user environment, except for one concern: accents, which didn't show correctly. Backup is quite simple: the SQLite database is a single file, so just copy it.
I use SQLite for storing XML config data and have had no problems with it. I use the System.Data.SQLite provider: http://sqlite.phxsoftware.com/. It's solid and has a good support forum. It includes a LINQ provider and integrates with VS 2008, so you can use Server Explorer to query tables. The examples and documentation also show how to use parameterized commands and transactions for increased performance.
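As a hedged illustration of the parameterized-command-plus-transaction pattern with System.Data.SQLite (the table and connection string here are made up for the example):

using System;
using System.Data.SQLite;

class Demo
{
    static void Main()
    {
        using (var conn = new SQLiteConnection("Data Source=app.db"))
        {
            conn.Open();
            // Wrapping inserts in one transaction avoids a disk sync per statement.
            using (var tx = conn.BeginTransaction())
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText =
                    "INSERT INTO Reports (Name, CreatedOn) VALUES (@name, @createdOn)";
                cmd.Parameters.AddWithValue("@name", "Daily run");
                cmd.Parameters.AddWithValue("@createdOn", DateTime.UtcNow);
                cmd.ExecuteNonQuery();
                tx.Commit();
            }
        }
    }
}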
The release candidate for LINQPad now supports SQLite: http://www.linqpad.net/Beta.aspx.
SQLite stores everything in a single file, which can be backed up like any other binary file.
SQLite only supports file-level locking, but that shouldn't present a performance problem, since it doesn't sound like you'll have a large number of simultaneous transactions.
Unicode shouldn't be a problem. This forum thread covers a case where someone was trying to read Unicode characters with an incompatible utility: http://sqlite.phxsoftware.com/forums/t/954.aspx.
This site shows how to do case-insensitive UTF-8 comparisons using System.Data.SQLite via a custom collator, with Russian characters as an example: http://www.codeproject.com/KB/database/SQLiteUTF8CIComparison.aspx.
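Based on that article, a rough sketch of what such a collation can look like with System.Data.SQLite (the collation name is illustrative, and the article's actual comparison logic may differ):

using System;
using System.Data.SQLite;

// Registers a collation named UTF8CI that the engine calls for comparisons
// on any column declared with COLLATE UTF8CI.
[SQLiteFunction(FuncType = FunctionType.Collation, Name = "UTF8CI")]
public class Utf8CiCollation : SQLiteFunction
{
    public override int Compare(string param1, string param2)
    {
        // Culture-aware, case-insensitive comparison.
        return string.Compare(param1, param2, StringComparison.CurrentCultureIgnoreCase);
    }
}

A column can then be declared as, e.g., Name TEXT COLLATE UTF8CI, or a query can say ORDER BY Name COLLATE UTF8CI.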
Related
All other applications I've written have been network/web apps where I've used an SQL database to store data and that has made sense to me.
If I'm creating an application that never needs to be networked, is there a standard way to store this data permanently? A text file is possible, but it doesn't give me the querying benefits of a SQL server, nor is it very secure. Is there something similar to a SQL server that I can initialize and save when starting up my program?
Perhaps there is some structure I've never come across?
From what I've read, I might be able to do something like this with SQLite, as mentioned above. Does that make sense for large and distributed applications?
Thanks in advance for any clarifications on how to design these types of programs.
Edit: to clarify what #TomTom was saying, it is not a large amount of data like he is suggesting. I would be surprised if it ever got over a few gigs of data. My point in saying "large" was that creating some sort of data structure that I could save into a text file, then load up and search through to grab my data, seemed unreasonable compared to using a SQL-like database.
Reading through the answers apparently SQLite or something similar is reasonable to use for desktop applications. I'll continue looking into it and probably use it to track data for my next project.
You can use an embedded database - this can be a SQL database, but does not have to be.
For Windows, look at SQLite, SQL Server Compact, and RavenDB (a non-SQL, document database).
You could still use SQL database, but locally. Try SQLite.
Another option is to use the Windows built-in database engine, ESENT. It is fast, but its API is not really convenient to use.
I am using a xml file as a database currently in development.
The XML file is going to be modified by multiple users over the network. (It's not on a server per se, but on my computer, where they have access over the network.)
I know it's probably a bad idea to use XML for this, but the structure of XML is much better/cleaner/something I like.
I'm wondering what my options are. Would I be able to continue with the XML using some custom background connection layer (which would verify all the necessary details to allow me to read from and write to the XML without issues)?
Or am I stuck using some SQL type of database? If so, is there some sort of database that is somewhat similar to XML?
EDIT: Reason for liking XML:
It's grouped easily for the eyes.
<SomeDocument name="Something">
  <URL>bbbb</URL>
  <Something>2342</Something>
  <Something_That_would_have_been_in_another_database>derp</Something_That_would_have_been_in_another_database>
</SomeDocument>
rather than linking 3-4 tables together...
There are some examples of XML-based databases that support multi-user environments. One is the OneNote Revision File Format used by Microsoft OneNote. Although there is very detailed documentation for it, it is tremendously complicated to support multiple users editing a single file. Basically, one could argue that an XML-based storage is not a viable option when you need multi-user support.
If you are stuck with the XML file, you could look into the OneNote file format, but it isn't a traditional XML format, since it also uses a "binary wrapper": the actual content is defined as XML data within the binary file, while transactions/revisions/free chunks are represented in binary. This is necessary since you have to allocate specific portions of the file for users to write to while you have the file open.
If you don't want to use a dedicated server software, you could use various file-based databases like SQL CE or SQLite.
You would need to deal with concurrency issues if you used a file that several users had access to. Guarantees need to be made that one user won't overwrite another user's changes made around the same time.
My suggestion is to use a proper database (e.g. SQL Server) that will handle these issues for you.
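If you do stay with a shared XML file, the bare minimum is an exclusive lock held across the whole read-modify-write cycle. A crude, illustrative C# sketch (not production-ready; the retry count and delay are arbitrary):

using System;
using System.IO;
using System.Threading;
using System.Xml.Linq;

static class SharedXmlFile
{
    // Opens the file exclusively, applies a change, and writes it back,
    // retrying while another user holds the lock.
    public static void Update(string path, Action<XDocument> change, int retries = 20)
    {
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                // FileShare.None blocks all other readers/writers until disposal,
                // so the read-modify-write below is atomic with respect to them.
                using (var fs = new FileStream(path, FileMode.Open,
                                               FileAccess.ReadWrite, FileShare.None))
                {
                    var doc = XDocument.Load(fs);
                    change(doc);
                    fs.SetLength(0);       // truncate before rewriting
                    fs.Position = 0;
                    doc.Save(fs);
                    return;
                }
            }
            catch (IOException) when (attempt < retries)
            {
                Thread.Sleep(250);         // file is locked; wait and retry
            }
        }
    }
}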
I am not familiar with the C# solutions, but for our Java application we use eXist-db and query it with XQuery. I'm not too familiar with them, but some use MarkLogic. Still more use Berkeley DB.
The question of whether to use a native XML database, an XML-enabled database, a so-called NoSQL database, or any of the more traditional methods depends on multiple factors. Just to mention two:
Most importantly, do you have your data in XML, and do you want to keep it that way? If so, use an XML-enabled solution.
Do you need scalability or performance? If so, you will need a solution that can deal with that. There are lots of NoSQL and XML databases that are well capable of handling that.
As for concurrency: any database should deal with that natively.
A number of databases have been mentioned already. To single out a few: MarkLogic Server ( www.marklogic.com ) is built to scale and perform up to terabyte scale (and beyond), and has connectors for Java and .NET, among others. The solution from 28msec ( www.28msec.com , based on Zorba) runs in the cloud and should scale too.
But most interesting to mention here is that these databases are often used through HTTP/REST interfaces. That allows easy integration from any programming language, and makes interchanging them easier too.
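For a feel of what that looks like, a hedged sketch of calling such a REST interface from C# (the host, port, endpoint, and query parameter are hypothetical, not any particular product's API):

using System;
using System.Net;

class RestQuery
{
    static void Main()
    {
        // A made-up endpoint that evaluates an XQuery and returns XML.
        string url = "http://dbserver:8000/v1/eval?xquery=" +
                     Uri.EscapeDataString("collection('docs')[1]");
        using (var client = new WebClient())
        {
            string xml = client.DownloadString(url);
            Console.WriteLine(xml);
        }
    }
}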
I have a system (using Entity Framework) that is deployed in various production systems and also on a quality control system. My problem is that data entry is often done only on one of those occurrences of my system (different databases).
I want to find the best way to transfer my data from one database to another. Ids can change, as long as the relations between my objects are maintained. 98% of my data is in the DB; some of it is external files, which I can manage separately, manually.
Currently we use a xml structure as a transition file. The file is then imported in the destination system, and code manually imports the entities and re-creates the data.
I'm looking for a more generic way to do this, with less code. Since all my data is stored in entities, couldn't I simply create a big list, throw all my objects in there, serialize that in some manner into an external file, and finally, generically import all the entities from it into my destination system? (I'll probably have to be careful in maintaining relation ids, but it should be ok...)
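For illustration, a rough sketch of what I have in mind (the entity types and the container class are just placeholders):

using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization;

[DataContract] public class Customer { [DataMember] public string Name; }
[DataContract] public class Order { [DataMember] public Customer Customer; }

[DataContract]
public class ExportBundle
{
    [DataMember] public List<Customer> Customers = new List<Customer>();
    [DataMember] public List<Order> Orders = new List<Order>();
}

public static class Exporter
{
    public static void Save(ExportBundle bundle, string path)
    {
        // PreserveObjectReferences keeps the object graph intact, so
        // relations survive even though database ids change on import.
        var serializer = new DataContractSerializer(
            typeof(ExportBundle),
            new DataContractSerializerSettings { PreserveObjectReferences = true });
        using (var fs = File.Create(path))
            serializer.WriteObject(fs, bundle);
    }
}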
Anyway, I'm wondering if anyone has smart approaches; I'm pretty sure I'm not the first with a similar problem.
Thanks!
You need to put some process around this. If all environments contain the same data (unlikely), you can replicate; that is the most automatic option. But a QA environment should not update production, so you have to really think this through.
If semi-automated is okay, there are tools out there you can use from a variety of vendors. I use Red Gate tools, personally, but others are also fine.
Can you set up a more automated push with EF? Sure, but the amount of time you spend is really not worth it.
In my opinion you can check some of the following approaches:
1) Use Sql Compare or Sql Data Compare. Those tools are from Red Gate and can be found here
2) Regular backups and restores of the databases. You could, if it is an option, regularly back up your most up-to-date database and restore it on the destination systems. I have no experience automating this, but here is a link on doing it through .NET.
3) You could always give it a go creating a version control system of your own. I would picture such a system selecting all records from a certain table (or all of them), deleting all records in the target database, and inserting the new ones. This seems pretty complex, though, as you have to worry about relationships, data dependencies, etc.
Hope this helps in some way.
Regards
If for some reason you are not satisfied with the existing tools, you may want to take a look at the Sync Framework and implement this functionality yourself for your particular databases.
Given what you described, pushing data from one SQL Server to another for demo purposes, you should consider SQL Server Integration Services.
If you've got a simple scenario where you just move the data and objects from one DB to the next, you can use the built-in wizards. If you need to do custom stuff, you can build complex workflows using C# and SQL (tools you already know). Note: most of what you're going to want comes with the Standard edition, so if you're using Express this is less interesting.
The story for Red Gate products is more compelling when you don't have SQL Server (so you have to go out and buy something), or if you are interested in finding out what the changes are between DBs (like viewing code changes to a .cs file in a source control product).
I have a C# application that allows one user to enter information about customers and job sites. The information is very basic.
Customer: Name, number, address, email, associated job site.
Job Site: Name, location.
Here are my specs I need for this program.
No limit on amount of data entered.
Single user per application. No concurrent activity or multiple users.
Allow user entries/data to be exported to an external file that can be easily shared between applications/users.
Allows for user queries to display customers based on different combinations of customer information/job site information.
The data will never be viewed or manipulated outside of the application.
The program will be running almost always, minimized to the task bar.
Startup time is not very important, however I would like the queries to be considerably fast.
This all seems to point me towards a database, but a very lightweight one. However, I also need it to have no limitations as far as data storage goes. If you agree that I should use a database, please let me know what would be best suited for my needs. If you don't think I should use a database, please make some other suggestions on what you think would be best.
My suggestion would be to use SQLite. You can find it here: http://sqlite.org/. And you can find the C# wrapper version here: http://sqlite.phxsoftware.com/
SQLite is very lightweight and has some pretty powerful stuff for such a lightweight engine. Another option you can look into is Microsoft Access.
You're asking the wrong question again :)
The better question is "how do I build an application that lets me change the data storage implementation?"
If you apply the repository pattern and properly interface it, you can build interchangeable persistence layers. So you could start with one implementation and change it as needed, without needing to re-engineer the business or application layers.
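A minimal sketch of that idea (types and members invented for the example):

using System.Collections.Generic;

public class Customer
{
    public int Id;
    public string Name;
}

// The application codes against this interface only.
public interface ICustomerRepository
{
    Customer Get(int id);
    IEnumerable<Customer> GetAll();
    void Save(Customer customer);
}

// Each persistence option below becomes one implementation, e.g.
// XmlCustomerRepository, SQLiteCustomerRepository, SqlServerCustomerRepository;
// swapping them never touches the business layer.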
Once you have a repository interface, you could try implementations with a lot of different approaches:
Flat File - You could persist the data as XML, and provided that it's not a lot of data, you could store the full contents in memory (just read the file at startup, write the file at shutdown). With in-memory XML you can get very high throughput, without concern for database indexes, etc.
Distributable DB - SQLite or SQL Compact work great; they offer many DB benefits, and require no installation
Local DB - SQL Express is a good middle-ground between a lightweight and full-featured DB. Access, when used carefully, can suffice. The main benefit is that it's included with MS Office (although not installed by default), and some IT groups are more comfortable having Access installed on machines than SQL Express.
Full DB - MySQL, SQL Server, PostgreSQL, et al.
Given your specific requirements I would advise you towards an XML-based flat file--with the only condition being that you are OK with the memory usage of the application directly correlating to the size of the file (since your data is text, even with the weight of XML, it would take a lot of entries to become very large).
Here's the pros/cons--listed by your requirements:
Cons
No limit on amount of data entered.
Using in-memory XML means your application will not scale indefinitely. It could easily handle a 10 MB data file, and 100 MB shouldn't be an issue (unless your system is low on RAM); above that, you have to seriously ask, "can I afford this much memory?"
Pros
Single user per application. No concurrent activity or multiple users.
XML can be read into memory and held by the process (AppDomain, really). It's perfectly suited for single-user scenarios where concurrency is a very narrow concern.
Allow user entries/data to be exported to an external file that can be easily shared between applications/users.
XML is perfect for exporting, and also easy to import to Excel, databases, etc...
Allows for user queries to display customers based on different combinations of customer information/job site information.
LINQ to XML is your friend :D (see the sketch after this list)
The data will never be viewed or manipulated outside of the application.
...then holding it entirely in memory doesn't cause any issues
The program will be running almost always, minimized to the task bar.
so loading the XML at startup and writing at shutdown will be acceptable (though if the file is very large it could take a while)
Startup time is not very important, however I would like the queries to be considerably fast
Reading the XML will be relatively slow at startup, but once it's loaded in memory it will be hard to beat. Any given DB will require that the DB engine be started, that interop/cross-process/cross-network calls be made, that the results be loaded from disk (if not cached by the engine), etc.
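To make the flat-file option concrete, here is a small LINQ to XML sketch (file and element names invented for the example):

using System;
using System.Linq;
using System.Xml.Linq;

class Program
{
    static void Main()
    {
        // Startup: read the whole file into memory once.
        var doc = XDocument.Load("customers.xml");

        // Query: customer names at a given job site.
        var names = from c in doc.Root.Elements("Customer")
                    where (string)c.Element("JobSite") == "North Plant"
                    select (string)c.Element("Name");
        foreach (var name in names)
            Console.WriteLine(name);

        // Mutate in memory; persist once at shutdown.
        doc.Root.Add(new XElement("Customer",
            new XElement("Name", "Acme"),
            new XElement("JobSite", "North Plant")));
        doc.Save("customers.xml");
    }
}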
It sounds to me like a database is 100% what you need. It offers both the data storage, data retrieval (including queries) and the ability to export data to a standard format (either direct from the database, or through your application.)
For a light database, I suggest SQLite (pronounced 'SQL Lite' ;) ). You can google for tutorials on how to set it up, and then how to interface with it via your C# code. I also found a reference to this C# wrapper for SQLite, which may be able to do much of the work for you!
How about SQLite? It sounds like it is a good fit for your application.
You can use System.Data.SQLite as the .NET wrapper.
You can get SQL Server Express for free. I would say the question is not so much "why should you use a database?" as "why shouldn't you?" This type of problem is exactly what databases are for, and SQL Server is a very powerful and widely used database, so if you are going to go for some other solution you need a good reason not to go with a database.
A database would be a good fit. SQLite is good as others have mentioned.
You could also use a local instance of SQL Server Express to take advantage of improved integration with other pieces of the Microsoft development stack (since you mention C#).
A third option is a document database like Raven, which may be a good fit from the sounds of your data.
edit
A fourth option would be to try Lightswitch when the beta comes out in a few days. (8-23-2010)
/edit
There is always going to be a limit on data storage (the empty space on the hard disk). According to Wikipedia, SQL Server Express 2008 R2 is limited to 10 GB per database.
I have developed a network application that has been in use in my company for the last few years.
At the start it managed information about users, rights, etc.
Over time it grew with other functionality, to the point that I have tables with, let's say, 10-20 columns and even 20,000 - 40,000 records.
I keep hearing that Access is not good for multi-user environments.
Second thing is the fact that when I try to read some records from the table over the network, the whole table has to be pulled to the client.
It happens because there is no database engine on the server side and data filtering is done on the client side.
I would migrate this project to SQL Server, but unfortunately that cannot be done in this case.
I was wondering if there is a more reliable solution for me than an Access database while still staying with a single-file database system.
We have quite a huge system using dBase IV.
As far as I know, it is a fully multiuser database system.
Maybe it would be good to use that instead of Access?
What makes me unsure is the fact that dBase IV is much older than Access 2000.
I am not sure if it would be a good solution.
Maybe there are some other options?
If you're having problems with your Jet/ACE back end with the number of records you mentioned, it sounds like you have schema design problems or an inefficiently-structured application.
As I said in my comment to your original question, Jet does not retrieve full tables. This is a myth propagated by people who don't have a clue what they are talking about. If you have appropriate indexes, only the index pages will be requested from the file server (and only those pages needed to satisfy your criteria), and the only data pages retrieved will be those holding the records that match the criteria in your request.
So, you should look at your indexing if you're seeing full table scans.
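For instance, assuming a .NET client, an index can be added to the Jet back end via OleDb (the path, table, and column names are placeholders):

using System.Data.OleDb;

class AddIndex
{
    static void Main()
    {
        var cs = @"Provider=Microsoft.Jet.OLEDB.4.0;" +
                 @"Data Source=\\server\share\backend.mdb";
        using (var conn = new OleDbConnection(cs))
        {
            conn.Open();
            // Index the field used in selection criteria so Jet can fetch
            // index pages instead of scanning data pages.
            using (var cmd = new OleDbCommand(
                "CREATE INDEX idxUsersLastName ON Users (LastName)", conn))
            {
                cmd.ExecuteNonQuery();
            }
        }
    }
}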
You don't mention your user population. If it's over 25 or so, you probably would benefit from upsizing your back end, especially if you're already comfortable with SQL Server.
But the problem you described for such tiny tables indicates a design error somewhere, either in your schema or in your application.
FWIW, I've had Access apps with Jet back ends with 100s of thousands of records in multiple tables, used by a dozen simultaneous users adding and updating records, and response time retrieving individual records and small data sets was nearly instantaneous (except for a few complex operations like checking newly entered records for duplication against existing data -- that's slower because it uses lots of LIKE comparisons and evaluation of expressions for comparison). What you're experiencing, while not an Access front end, is not commensurate with my long experience with Jet databases of all sizes.
You may wish to read this informative thread about Access: Is MS Access (JET) suitable for multiuser access?
For the record, this answer is copied/edited from another question I answered.
Aristo,
You CAN use Access as your centralized data store.
It is simply NOT TRUE that Access will choke in multi-user scenarios--at least up to 15-20 users.
It IS true that you need a good backup strategy with the Access data file. But last I checked, you need a good backup strategy with SQL Server, too. (With the very important caveat that SQL Server can do "hot" backups, but Access cannot.)
So... you CAN use Access as your data store. Then, if you can get beyond the company politics controlling your network, perhaps you could begin moving toward upgrading your current application to use SQL Server.
I recently answered another question on how to split your database into two files. Here is the link.
Creating the Front End MDE
Splitting your database file into a front end and a back end is key to making it more performant. (Assume, as David Fenton mentioned, that you have a reasonably good design.)
If I may mention one last thing...it is ridiculous that your company won't give you other deployment options. Surely there is someone there with some power who you can get to "imagine life without your application." I am just wondering if you have more power than you might realize.
Seth
The problems you experience with an Access Database shared amongst your users will be the same with any file based database.
A read will pull a lot of data into memory and writes are guarded with some type of file lock. Under your environment it sounds like you are going to have to make the best of what you have.
"Second thing is the fact that when I try to read some records from the table over the network, the whole table has to be pulled to the client. "
Actually, no. This is a common misstatement spread by folks who do not understand how Jet, the database engine inside Access, works. Pulling down all the records, or an excessive number of records, happens because the fields used in the selection criteria or sorting are not in an index. We've also found that indexing yes/no (boolean) fields can make a huge difference in some queries.
What really happens is that Jet brings down only the index pages and data pages that are required. While this is more data than a server-based database engine would return, it is not the entire table.
I also have clients with 600K and 800K records in various tables and performance is just fine.
We have an Access database application that is used pretty heavily. I have had 23 users on it all at the same time without any issues. As long as they don't access the same record, I don't have any problems.
I do have a couple of forms that are used and updated by several different departments. For instance, I have a quoting form that contains 13 different tabs with 10-20 fields on each tab. Users are typically in a single record for minutes at a time, editing and looking up information. To avoid write conflicts, I call the function below any time a field is changed. As long as it is not a new record being entered, it saves the record.
Function funSaveTheRecord()
    ' Only flush edits for existing records; new records are saved normally.
    If ([chkNewRecord].Value = False And Me.Dirty) Then
        ' To save the record, turn off the form's Dirty property.
        Me.Dirty = False
    End If
End Function
The way I have everything set up is as follows:
PDC.mdb <-- Front end, stored on the user's machine. Every user has their own copy. Links to tables found in PDC_be.mdb. Contains all forms, reports, queries, macros, and modules. I created a form that I can use to toggle the shift-key bypass on/off; only I have access to it.
PDC_be.mdb <-- Back end, stored on the server. Contains all data. The only form and VBA it contains is for toggling the shift-key bypass on/off; only I have access to it.
Secured.mdw <-- Security file, stored on the server.
Then I put a shortcut on the users desktop that ties the security file to the front end and also provides their login credentials.
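For what it's worth, the shortcut target looks something like this (the paths and user name are illustrative; /wrkgrp points Access at the workgroup file and /user supplies the login):

"C:\Program Files\Microsoft Office\OFFICE11\MSACCESS.EXE" "C:\PDC\PDC.mdb" /wrkgrp "\\server\share\Secured.mdw" /user jsmith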
This database has been running without error or corruption for over 6 years.
Access is not a flat file database system! It's a relational database system.
You can't use SQL Server Express?
Otherwise, MySQL is a good database.
But if you can't install ANYTHING (you should get into those politics sooner rather than later -- or it WILL be later), just use your existing database system.
Basically, Access cannot handle more than 5 people connected at the same time, or it will corrupt on you.