I want to use SQLite database which is on FTP server without downloading it. Is it possible to use this database directly?
No. The FTP protocol is designed to sequentially transfer the entire contents of files. There is no way to perform the random reads and writes that SQLite (or any database engine) needs in order to work.
The connection strings for the data providers used by SQLite support only UNC paths; URLs are not supported. You must download the file locally.
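For illustration, a minimal sketch of the download-first approach, assuming the System.Data.SQLite provider and placeholder host, credentials and paths:

using System.Data.SQLite; // System.Data.SQLite NuGet package
using System.Net;

static class FtpSqliteExample
{
    public static void Run()
    {
        // Hypothetical FTP location and local path -- substitute your own.
        string localPath = @"C:\temp\mydb.sqlite";
        using (var client = new WebClient())
        {
            client.Credentials = new NetworkCredential("user", "password");
            // FTP offers no random access, so the whole file is downloaded.
            client.DownloadFile("ftp://ftp.example.com/data/mydb.sqlite", localPath);
        }

        using (var conn = new SQLiteConnection("Data Source=" + localPath))
        {
            conn.Open();
            using (var cmd = new SQLiteCommand("SELECT COUNT(*) FROM sqlite_master", conn))
            {
                long objectCount = (long)cmd.ExecuteScalar();
            }
        }
    }
}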
I do not know SQLite in detail, but if your database files are on the server, you can mount the filesystem over FTP and run SQLite locally against the mounted files.
Because of how the FTP protocol is designed, if your database is a single file, the whole file will be downloaded each time it is needed, even if you only want the first row (unless a file cache is used). If your database consists of multiple files, each file will be downloaded when it is needed. As Pavel Krymets said, it will be slow, so it is not recommended.
How can I make sure that a file uploaded through SFTP (on a Linux-based system) stays locked during the transfer so an automated system will not read it?
Is there an option on the client side? Or server side?
The SFTP protocol supports locking since version 5. See the specification.
You didn't specify what SFTP server you are using, so I'm assuming the most widespread one, OpenSSH. OpenSSH supports SFTP version 3 only, so it does not support locking.
Anyway, even if your server supported file locking, most SFTP clients/libraries won't support SFTP version 5; and even if they do, they won't support the locking feature. Note that the lock is explicit: the client has to request it.
There are some common workarounds for the problem:
As suggested by @user1717259, you can have the client upload a "done" file once an upload finishes. Make your automated system wait for the "done" file to appear.
You can have a dedicated "upload" folder and have the client (atomically) move the uploaded file to a "done" folder. Make your automated system look in the "done" folder only.
Have a file naming convention for files being uploaded (".filepart") and have the client (atomically) rename the file to its final name after the upload. Make your automated system ignore the ".filepart" files (a sketch of this approach follows this list).
See (my) article Locking files while uploading / Upload to temporary file name for an example of implementing this approach.
Also, some SFTP servers have this functionality built-in. For example, ProFTPD with its HiddenStores directive (courtesy of @fakedad).
A gross hack is to periodically check the file attributes (size and time) and consider the upload finished if the attributes have not changed for some time interval.
You can also make use of the fact that some file formats have a clear end-of-file marker (like XML or ZIP), so you can tell when you have downloaded an incomplete file.
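A minimal sketch of the rename-on-completion workaround, assuming the SSH.NET library (Renci.SshNet) and hypothetical connection details:

using System.IO;
using Renci.SshNet; // SSH.NET NuGet package

static class SftpAtomicUpload
{
    public static void Upload(string localFile, string remoteDir)
    {
        // Hypothetical host and credentials -- substitute your own.
        using (var sftp = new SftpClient("sftp.example.com", "user", "password"))
        {
            sftp.Connect();

            string finalName = remoteDir + "/" + Path.GetFileName(localFile);
            string tempName = finalName + ".filepart";

            // Upload under the temporary name; readers ignore *.filepart files.
            using (var stream = File.OpenRead(localFile))
                sftp.UploadFile(stream, tempName);

            // The server-side rename is atomic, so the file appears under its
            // final name only once it is complete.
            sftp.RenameFile(tempName, finalName);

            sftp.Disconnect();
        }
    }
}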
A typical way of solving this problem is to upload your real file, and then to upload an empty 'done.txt' file.
The automated system should wait for the appearance of the 'done' file before trying to read the real file.
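On the reading side, a sketch of waiting for the marker (again assuming SSH.NET; all paths are placeholders):

using System.IO;
using System.Threading;
using Renci.SshNet; // SSH.NET NuGet package

static class DoneFileWatcher
{
    public static void WaitForUpload(SftpClient sftp)
    {
        // Poll until the uploader drops the empty marker file.
        while (!sftp.Exists("/upload/done.txt"))
            Thread.Sleep(5000);

        // The marker exists, so the real file is complete and safe to read.
        using (var output = File.Create(@"C:\incoming\data.bin"))
            sftp.DownloadFile("/upload/data.bin", output);
    }
}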
A simple file-locking mechanism for SFTP is to first upload the file to a directory (folder) where the reading process isn't looking. You can create an alternate folder using the sftp> mkdir command. Upload the file to the alternate directory instead of the final destination directory. Once the sftp> put command completes, move the file:
sftp> rename alternate_path/filename destination_path/filename (the command is called move or mv in some clients). Since the server-side rename just updates directory entries rather than copying data, it is atomic, so it is an effective lock.
I'm developing an application using C# 4.0 and SQL Server 2008 R2 Express, my application needs to store and retrieve files (docx, pdf, png) locally and remotely, which approach would be the best?
Store the files in a separate database (problem: restricted to 10 GB)
Use a Windows shared folder (how to do it?)
Use an FTP server (which server and library, and how to do it?)
SQL Server supports FILESTREAM, so if you have enough control over the SQL Server install to enable that feature then it seems like a good fit for you.
FILESTREAM integrates the SQL Server Database Engine with an NTFS file system by storing varbinary(max) binary large object (BLOB) data as files on the file system. Transact-SQL statements can insert, update, query, search, and back up FILESTREAM data. Win32 file system interfaces provide streaming access to the data.
Files stored directly in the file system with FILESTREAM don't count towards the database size because they aren't stored in the DB.
To confirm with an official source: https://learn.microsoft.com/en-us/sql/relational-databases/blob/filestream-compatibility-with-other-sql-server-features
SQL Server Express supports FILESTREAM. The 10-GB database size limit does not include the FILESTREAM data container.
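For illustration, a sketch of streaming a FILESTREAM column from C# with SqlFileStream; the dbo.Documents table and its Content column are hypothetical, and the access has to happen inside a transaction:

using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.IO;

static class FileStreamRead
{
    public static byte[] ReadDocument(string connectionString, int id)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                string path;
                byte[] txContext;
                using (var cmd = new SqlCommand(
                    "SELECT Content.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() " +
                    "FROM dbo.Documents WHERE Id = @id", conn, tx))
                {
                    cmd.Parameters.AddWithValue("@id", id);
                    using (var reader = cmd.ExecuteReader())
                    {
                        reader.Read();
                        path = reader.GetString(0);
                        txContext = (byte[])reader[1];
                    }
                }

                // Stream the BLOB through the Win32 file system interface.
                byte[] data;
                using (var stream = new SqlFileStream(path, txContext, FileAccess.Read))
                using (var ms = new MemoryStream())
                {
                    stream.CopyTo(ms);
                    data = ms.ToArray();
                }

                tx.Commit();
                return data;
            }
        }
    }
}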
I want to restore a (not yet existing) database from a .bak file into a remote database server via a C# (.Net 4.0) program.
I know it is not possible to do so via an SQL script, because the .bak file needs to be located on the SQL Server machine. Is it possible via C# though?
Basically I want this:
public bool RestoreDatabase(FileInfo backupFile, string connectionString, string dbName)
{
    // Magically Restore Database
    // Throw Exception on error (Db already exists, file not found, etc)
}
Before I invest hours of programming and investigation, I want to know whether it is technically possible, or whether I have to transfer the .bak file to the SQL Server.
EDIT:
The context is that I want to minimize the "manual" work that needs to be done in my current project. Currently, when installing the software in a new environment, someone has to manually install the databases by copying the .bak file to the SQL Server, opening SQL Server Management Studio and restoring the database there. Two databases are needed, and those MIGHT be placed on two different SQL Servers.
My idea was to run one program on ANY system in the same network as the SQL Servers, enter SQL login credentials, and restore the databases from that one system, without any network drive. Since a network drive would again mean manual configuration (copying the .bak file to the network drive, enabling SQL Server to access the network drive [at which point you could just copy the file directly to the SQL Server]), I want to avoid it.
EDIT2:
Due to different context-related issues, I cannot export the database as SQL/DACPAC/snapshot. It has to be a .bak, sadly.
You asked "Is it possible via C# though?".
The answer is no, it isn't possible via C#.
As @Mithrandir says in the comments, the computer running SQL Server must be able to access the physical backup file somehow. That means the file either needs to be located somewhere on that computer, or it must reside on a file share to which the computer has access.
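What you can do from C# is drive the restore itself, once the .bak sits somewhere the server can read (a disk local to the server, or a UNC share its service account can access). A sketch, with all names being placeholders:

using System.Data.SqlClient;

static class DatabaseRestorer
{
    public static void RestoreDatabase(string serverVisibleBakPath,
                                       string connectionString, string dbName)
    {
        // Connect to master, because the target database may not exist yet.
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            // The path is resolved by the SQL Server service account, not by
            // this program, so it must be reachable from the server itself.
            // WITH MOVE clauses may be needed if the server's data paths differ.
            using (var cmd = new SqlCommand(
                "RESTORE DATABASE @db FROM DISK = @bak WITH RECOVERY", conn))
            {
                cmd.Parameters.AddWithValue("@db", dbName);
                cmd.Parameters.AddWithValue("@bak", serverVisibleBakPath);
                cmd.CommandTimeout = 0; // restores can run long
                cmd.ExecuteNonQuery(); // throws SqlException on failure
            }
        }
    }
}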
Another option is to generate SQL scripts that create the whole database, including the initial data as INSERT statements. Then you do not need to transfer any .bak file.
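A sketch of that scripting approach using SQL Server Management Objects (SMO); the assembly references and the server/database names are assumptions:

using System;
using Microsoft.SqlServer.Management.Smo;

static class SchemaAndDataScripter
{
    public static void ScriptDatabase()
    {
        // Hypothetical source instance and database -- substitute your own.
        var server = new Server(@".\SQLEXPRESS");
        Database db = server.Databases["MyAppDb"];

        var scripter = new Scripter(server);
        scripter.Options.ScriptSchema = true;
        scripter.Options.ScriptData = true; // emit INSERT statements as well
        scripter.Options.ScriptDrops = false;

        foreach (Table table in db.Tables)
        {
            if (table.IsSystemObject)
                continue;
            // EnumScript (rather than Script) is required when ScriptData is set.
            foreach (string line in scripter.EnumScript(new SqlSmoObject[] { table }))
                Console.WriteLine(line);
        }
    }
}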
Full edit:
The scenario is that after uploading the file to the server via a secured web service, I'd like to save/create a copy of that file to another server in a LAN or another network.
I'd like to know what possible ways I could use to programmatically copy/create the backup of the file uploaded to the backup server (saving the file to the database would be the last option probably).
Here are a few details:
Files are of different types and sizes, mostly text, documents and images, ranging from a few KB to a couple of MB.
Database is SQL Server 2008 R2 and the only way to connect to it is via calls to a secured WCF service.
Servers can be in the same LAN or on separate networks (depends on the client requesting).
The 2nd server is a redundant server and is using the 1st one as its backup, and vice versa.
Took me a while to find this post. Just map the drive to the backup server's shared folder and implement WindowsImpersonationContext.
How to Impersonate a user in managed code?
I haven't seen security problems with this, and it doesn't require messing with HTTP/certificates.
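A sketch of that approach: log on with the backup server's credentials via LogonUser, impersonate, and copy to the UNC share. The share and account names are placeholders:

using System;
using System.ComponentModel;
using System.IO;
using System.Runtime.InteropServices;
using System.Security.Principal;

static class BackupCopier
{
    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool LogonUser(string user, string domain, string password,
                                 int logonType, int logonProvider, out IntPtr token);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool CloseHandle(IntPtr handle);

    // Use the supplied credentials for outbound (network) access only.
    const int LOGON32_LOGON_NEW_CREDENTIALS = 9;
    const int LOGON32_PROVIDER_WINNT50 = 3;

    public static void CopyToBackupServer(string localFile, string remoteShare,
                                          string user, string domain, string password)
    {
        IntPtr token;
        if (!LogonUser(user, domain, password,
                LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out token))
            throw new Win32Exception(); // surfaces the last Win32 error

        try
        {
            using (var identity = new WindowsIdentity(token))
            using (WindowsImpersonationContext ctx = identity.Impersonate())
            {
                // While impersonating, the UNC path is accessed as the backup user.
                File.Copy(localFile,
                          Path.Combine(remoteShare, Path.GetFileName(localFile)), true);
            }
        }
        finally
        {
            CloseHandle(token);
        }
    }
}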
I want to write a small C# FTP client class library which basically needs to transfer files to an FTP location.
What I want is 100% foolproof code, where I get some sort of acknowledgement that the FTP file transfer has either been 100% successful or failed.
No resume support is required.
Good to have (but secondary):
Some sort of distributed transaction where, only if the file transfer is successful for a file, I update my DB for that particular file with 1 (true); if it failed, the DB is updated with 0 (false).
But suppose the FTP file transfer was successful, yet for whatever reason the DB could not be updated; then the file on the FTP server should be deleted. I can easily do this using dirty C# code (where I manually try to delete the file if the DB update fails).
But what I am really looking for is a file-system-based transaction over FTP, so that neither the file transfer nor the DB update is committed until both succeed (hence no need for a manual delete).
Any clues?
Having had the "joy" of writing an FTP library myself, here is my advice:
1) It's NOT going to be easy, because FTP servers return different responses to the same command (like directory listings, regular FTP commands, and pretty much everything else).
2) This is going to take more time than you think.
3) The dream of a 100% foolproof transfer is not going to happen unless you control the FTP server and can add a new FTP command so you can compare file hashes.
Pretty much, if I were going to do this again and my goal was to transfer files (and not to learn from writing the library), I would buy an already-made library.
.NET has a built-in FTP client (FtpWebRequest) you can use. I don't know how robust it is in the face of FTP server quirks; you'll have to test it against your customer's FTP server. As for verifying that the upload was successful, the only tools you have are (1) making sure there was no transport error during the upload, and (2) validating the file size when you're done.
The FTP server isn't going to support transactions, so you're going to have to manage that yourself, but this isn't really a complicated scenario. Use a transaction for the DB update; backing out the FTP upload is one call.
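A sketch of that pattern with the built-in FtpWebRequest plus a DB transaction; the table, column and connection details are placeholders:

using System;
using System.Data.SqlClient;
using System.IO;
using System.Net;

static class TransactionalFtpUpload
{
    public static bool UploadAndRecord(string localFile, Uri remoteUri,
                                       NetworkCredential creds, string connectionString)
    {
        // 1. Upload the file.
        var upload = (FtpWebRequest)WebRequest.Create(remoteUri);
        upload.Method = WebRequestMethods.Ftp.UploadFile;
        upload.Credentials = creds;
        using (var src = File.OpenRead(localFile))
        using (var dst = upload.GetRequestStream())
            src.CopyTo(dst);
        ((FtpWebResponse)upload.GetResponse()).Close();

        // 2. Verify by comparing the remote size with the local size.
        var sizeReq = (FtpWebRequest)WebRequest.Create(remoteUri);
        sizeReq.Method = WebRequestMethods.Ftp.GetFileSize;
        sizeReq.Credentials = creds;
        long remoteSize;
        using (var resp = (FtpWebResponse)sizeReq.GetResponse())
            remoteSize = resp.ContentLength;
        if (remoteSize != new FileInfo(localFile).Length)
            return false;

        // 3. Record the transfer; delete the remote file if the DB update fails.
        try
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction())
                using (var cmd = new SqlCommand(
                    "UPDATE dbo.Transfers SET Uploaded = 1 WHERE FileName = @f",
                    conn, tx))
                {
                    cmd.Parameters.AddWithValue("@f", Path.GetFileName(localFile));
                    cmd.ExecuteNonQuery();
                    tx.Commit();
                }
            }
            return true;
        }
        catch (Exception)
        {
            // There is no distributed transaction over FTP, so compensate manually.
            var del = (FtpWebRequest)WebRequest.Create(remoteUri);
            del.Method = WebRequestMethods.Ftp.DeleteFile;
            del.Credentials = creds;
            ((FtpWebResponse)del.GetResponse()).Close();
            return false;
        }
    }
}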
Try using FTP with WCF.