Embedded Raven DataDirectory on my Dropbox not working 100%? - c#

I have my project files on my Dropbox folder so I can play around with my files at the office as well.
My project contains an EmbeddableDocumentStore with UseEmbeddedHttpServer set to true.
const int ravenPort = 8181;
NonAdminHttp.EnsureCanListenToWhenInNonAdminContext(ravenPort);

var ds = new EmbeddableDocumentStore {
    DataDirectory = "Data",
    UseEmbeddedHttpServer = true,
    Configuration = { Port = ravenPort }
};
Then one day, when I started my project on my office PC, I saw this message: Could not open transactional storage: D:\Dropbox\...\Data
Since I'm at an early stage of development, I deleted the Data folder on my Dropbox and the project started flawlessly. Now that I'm back at home, I've run into the same issue! Of course I don't want to end up deleting this folder every time.
Can't I store my development data on my Dropbox? Should I bypass something to get this to work?

Set the data directory to a physical disk volume on your local computer. You will not be able to use any sort of mapped drive, network share, UNC path, Dropbox, or SkyDrive as a data directory. Just because you have a drive letter does not mean you have a physical disk.
The only types of non-physical storage that even make sense are a LUN attached from a SAN over iSCSI or Fibre Channel, or an attached VHD in a virtualized or cloud environment. They all present to the OS as physical disks.
This would be the case for just about ANY data access environment. Try it with SQL Server if you don't believe me. In RavenDB's case, it is using ESENT as its data store, which requires direct access to the filesystem.
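A minimal sketch of the fix, keeping the question's settings but pointing DataDirectory at a local folder (the C:\RavenData path is just an example; any folder on a local physical disk outside the Dropbox tree will do):

const int ravenPort = 8181;
NonAdminHttp.EnsureCanListenToWhenInNonAdminContext(ravenPort);

var ds = new EmbeddableDocumentStore {
    // Example path: a folder on a local physical disk, outside any synced folder.
    DataDirectory = @"C:\RavenData",
    UseEmbeddedHttpServer = true,
    Configuration = { Port = ravenPort }
};
ds.Initialize();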
Update
To clarify: even if you are storing on a physical disk, you can't rely on any type of synchronization technology like Dropbox or SkyDrive. Why? Because they take a shared read lock on the files to watch for changes, while technologies like ESENT (which RavenDB is based upon) require an exclusive lock on the file.
Other technologies, such as SQL Server and virtual machine hosts, also take exclusive locks on their data stores. Why? Because they are constantly reading and writing bits of data in a random-access manner within the file. Would you really want Dropbox to attempt a sync operation for every bit of data that changes? It would be very inefficient and problematic.
Applications that can tolerate shared locks don't have this problem. For example, when you work on an MS Word document, the editing is all done in memory. When you save the file, Dropbox can read the entire file and sync it to the cloud. It can optimize by sending only the bits that have changed, but it still needs to be able to read the file to do so.
So if Dropbox has a shared read lock on the ESENT file, then when RavenDB tries to open it exclusively, it gets an error and raises the exception you are seeing.
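You can reproduce this class of failure in isolation with a plain FileStream. A small sketch (the path is hypothetical; the exclusive open mirrors what ESENT demands internally):

using System;
using System.IO;

// Hypothetical data file; ESENT opens its files with no sharing allowed.
var path = @"C:\RavenData\Data";
try
{
    using (var fs = new FileStream(path, FileMode.Open, FileAccess.ReadWrite, FileShare.None))
    {
        // Exclusive access obtained: nothing else holds a handle on the file.
    }
}
catch (IOException ex)
{
    // If a sync client has the file open, even just for shared reading,
    // the exclusive open fails with "being used by another process".
    Console.WriteLine("Could not lock file exclusively: " + ex.Message);
}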

Related

How to refresh the UNC path cache?

I've been experiencing the following issue.
I have a database attached to a remote SQL Server.
All needed impersonation actions are done; in other words, I have all the needed access to both the file system and SQL Server.
Let's say I have a FileStreamDB1 SQL database:
\\server\C$\MSSQL\Data\FileStreamDB1.mdf
\\server\C$\MSSQL\Data\FileStreamDB1_log.ldf
\\server\C$\MSSQL\Data\FileStreamDB1
At some point I would like to drop this database.
So I just use the following SQL statement (I call this code from C#):
DROP DATABASE [FileStreamDB1]
After that the database is deleted and all files are deleted as well (If I go to that server I don't find them - files and directories are really deleted).
But unfortunately the following code tells me that \\server\C$\MSSQL\Data\FileStreamDB1 still exists.
new DirectoryInfo(@"\\server\C$\MSSQL\Data\FileStreamDB1").Exists // returns true
Directory.Exists(@"\\server\C$\MSSQL\Data\FileStreamDB1") // returns true
It looks like the info about the directory is cached and I need to clear that cache
(SMB2 Directory Cache and I DO NOT WANT TO DISABLE IT)
I've also tried this (Refresh() returns void, so it has to be a separate call):
var di = new DirectoryInfo(@"\\server\C$\MSSQL\Data\FileStreamDB1");
di.Refresh();
di.Exists // still returns true
Any ideas how I can clear the Windows cache for UNC paths using C#?
The problem is that the remote info is cached for a certain amount of time, configurable in the registry.
All of the code will read from the cache first, resulting in File.Exists = true until the cache is invalidated.
I found a couple of ways to bypass this cache from code.
Insert $NOCSC$ after the server name in the UNC path; this bypasses the client-side cache. (Note: this doesn't work on Windows Server.)
Like so: Directory.Exists(@"\\server$NOCSC$\C$\MSSQL\Data\FileStreamDB1").
It also appears that having a FileSystemWatcher looking at the specified folder bypasses the caching as well. (I haven't tried this myself, so correct me if I'm wrong.)
Note: watching the UNC path from Windows Explorer also bypasses the cache, but only for that window, not for any other code running.
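A small sketch of both workarounds, using the paths from the question; the watcher approach is the untested one mentioned above:

using System.IO;

// 1. $NOCSC$ after the server name bypasses the client-side cache
//    (reportedly not available on Windows Server editions).
bool existsNow = Directory.Exists(@"\\server$NOCSC$\C$\MSSQL\Data\FileStreamDB1");

// 2. Untested alternative from above: keeping a FileSystemWatcher on the
//    parent folder is said to keep results for that path uncached.
using (var watcher = new FileSystemWatcher(@"\\server\C$\MSSQL\Data"))
{
    watcher.EnableRaisingEvents = true;
    bool existsWatched = Directory.Exists(@"\\server\C$\MSSQL\Data\FileStreamDB1");
}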
Sources:
https://stackoverflow.com/a/41057871/6058174
https://stackoverflow.com/a/35158172/6058174
http://woshub.com/slow-network-shared-folder-refresh-windows-server/

Efficiently pass files from web server to file server

I have multiple web servers and one central file server inside my data center,
and all my web servers store the user-uploaded files on the central internal file server.
I would like to know the best way to pass a file from a web server to the file server in this case.
As suggested, I'll try to add more details to the question.
The solution I came up with was:
after receiving files from the user at the web server, just do an HTTP POST to the file server. But I think there is something wrong with this, because it causes large files to be entirely loaded into memory twice (once at the web server and once at the file server).
Is your file server just another Windows/Linux server, or is it a NAS device? I can suggest a number of approaches based on your requirements. The question is why you would want to use the HTTP protocol when you have a much better way to transfer files between servers.
HTTP is best when you send text data, as HTTP itself is text-based. From the client side to the server side, HTTP is used because that is the only option the browser gives you. But between your servers, I feel you should use the SMB protocol (I'm assuming you are using Windows, since the question is tagged IIS) to move data. It is orders of magnitude faster and much more efficient to transfer the same data over SMB than over HTTP.
And with the SMB protocol, you do not have to write any code or complex scripts. As one of the other answers notes, you can just issue a simple copy command and it will happen for you.
So, summarizing the options for you (in order of my preference):
1. Let the files get uploaded to some location on each IIS web server, e.g. C:\temp\UploadedFiles. You can write a simple 2-3 line PowerShell script which will copy the files from C:\temp\UploadedFiles to \\FileServer\Files\UserID\<FILEGUID>\uploaded.file. This same PowerShell script can delete the file once it has been moved to the other server successfully.
E.g. the script can be this simple, and easy to run as a Windows scheduled task:
# $Source is assumed to be the upload folder from step 1.
$Source = "C:\temp\UploadedFiles"
$Destination = "\\FileServer\Files\UserID\<FILEGUID>\"
New-Item -ItemType directory -Path $Destination -Force
Copy-Item -Path $Source\*.* -Destination $Destination -Force
This script can be modified to suit your needs, e.g. to delete the files once the copy succeeds :)
2. In the ASP.NET application, you can save the file directly to the network location: in the SaveAs call, pass the network path itself. You have to make sure this network share is accessible to the IIS worker process and that it has write permission. Also, to my understanding, ASP.NET saves the file to a temporary location first (you do not have control over this if you are using HttpPostedFileBase or FormCollection). More details here
You can even run this async so that your requests will not be blocked:
if (FileUpload1.HasFile)
{
    // Save the uploaded file directly to the network share
    // (verbatim string needed: "\\n..." would otherwise be an escape sequence).
    FileUpload1.SaveAs(@"\\networkshare\filename");
}
https://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.fileupload.saveas(v=vs.110).aspx
3. Save the file to a local directory the current way, and then use an HTTP POST. This is the worst design possible: you first read the contents, then transfer them chunked to the other server, where you have to set up another web service which receives the file. Then you have to read the file from the request stream and save it to your location again. I'm not sure you need to do this.
Let me know if you need more details on any of the listed methods.
Or you could just write it to a folder on the web servers, and create a scheduled task that moves the files to the file server every x minutes (e.g. via robocopy). This also makes sure your web servers are not reliant on your file server.
Assuming that you have an HttpPostedFileBase then the best way is just to call the .SaveAs() method.
You need the UNC path to the file server and that is it. The simplest version would look something like this:
public void SaveFile(HttpPostedFileBase inputFile)
{
    var saveDirectory = @"\\fileshare\application\directory";
    var savePath = Path.Combine(saveDirectory, inputFile.FileName);
    inputFile.SaveAs(savePath);
}
However, this is simplistic in the extreme. Take a look at the OWASP Guidance on Unrestricted File Uploads. File uploads can be the source of many vulnerabilities in your application.
You also need to make sure that the web application has access to the file share. Take a look at this answer: Creating a file on network location in asp.net, for more info. Generally the best solution is to run the application pool with a special identity which is only used to access the folder.
The solution I came up with was: after receiving files from the user at the web server, just do an HTTP POST to the file server. But I think there is something wrong with this, because it causes large files to be entirely loaded into memory twice (once at the web server and once at the file server).
I would suggest not posting the file all at once; the whole file is then in memory, which is not needed.
You could post the file in chunks using AJAX. When a chunk arrives at your server, just append it to the file.
With the File API, you can read the file in chunks in JavaScript.
Something like this:
/** Upload a file in chunks. */
function upload(file) {
    var chunkSize = 8000;
    var start = 0;
    while (start < file.size) {
        var end = Math.min(start + chunkSize, file.size);
        var chunk = file.slice(start, end);
        var xhr = new XMLHttpRequest();
        xhr.onload = function () {
            // Check whether all chunks have arrived, then send the file name
            // (or send it in the first/last request).
        };
        xhr.open("POST", "/FileUpload", true);
        xhr.send(chunk);
        start = end;
    }
}
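For completeness, here is a hypothetical server-side counterpart to the "/FileUpload" endpoint above (not from the original thread), written as an ASP.NET MVC action. It assumes the file name arrives as a query-string parameter and that chunks arrive in order; the loop above fires its requests in parallel, so a real implementation would need to sequence them or write at explicit offsets:

using System.IO;
using System.Web.Mvc;

public class FileUploadController : Controller
{
    [HttpPost]
    public ActionResult Index(string fileName)
    {
        // NB: fileName comes from the client; sanitize it before combining
        // (see the OWASP guidance mentioned earlier). Share path is hypothetical.
        var path = Path.Combine(@"\\fileserver\uploads", fileName);
        using (var target = new FileStream(path, FileMode.Append, FileAccess.Write))
        {
            // Append this chunk's raw request body to the file.
            Request.InputStream.CopyTo(target);
        }
        return new HttpStatusCodeResult(200);
    }
}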
It can be implemented in different ways. If you are storing files on the file server as plain files in the file system, and all of your servers are inside the same virtual network, then it is better to create a shared folder on your file server; once you receive a file at the web server, just save it into this shared folder, directly on the file server.
Here are the instructions on how to create shared folders: https://technet.microsoft.com/en-us/library/cc770880(v=ws.11).aspx
Just map a drive
I take it you have a means of saving the uploaded file on the web server's local filesystem. The question pertains to moving the file from the web server (which is probably one of many load-balanced nodes) to a central file system that all web servers can access.
The solution to this is remarkably simple.
Let's say you are currently saving the files to some folder, say c:\uploadedfiles. The path to uploadedfiles is stored in your web.config.
Take the following steps:
Sign on as the service account under which your web site executes
Map a persistent network drive to the desired location, e.g. from command line:
NET USE f: \\MyFileServer\MyFileShare /user:SomeUserName password
Modify your web.config and change c:\uploadedfiles to f:\
Ta-da, all done. Just make sure the drive mapping is persistent and that you use a user with adequate permissions, and voila.

Upload having errors on AppHarbor. Good on local

if (Upload.ContentLength > 0)
{
    var FileName = Upload.FileName;
    var path = Path.Combine(Server.MapPath("~/Content/Images"), FileName);
    Upload.SaveAs(path);
    cement.ImageLocation = "/Content/Images/" + FileName;
}
This code works perfectly with local IIS hosting, but has a problem on AppHarbor. The error that I got from the logging is this:
Message
An unhandled exception has occurred.
Exceptions
[DirectoryNotFoundException: Could not find a part of the path 'D:\Users\apphb9840cac8716388\app\_PublishedWebsites\RMQGrainsBeta\Content\Images\Capture.PNG'.]
I read some of the articles about this and got some information that the folder might be protected or something, so I removed the ReadOnly flag on the Images folder and tried Git Bash. The problem is, Git Bash doesn't recognize the difference and won't push the changes to the folder.
I finally got an answer from AppHarbor. So the thing is, it is not the code that has the problem; it is the AppHarbor service that blocks us from uploading to or directly accessing the file system. Here is what the tech from AppHarbor says:
About storing images: AppHarbor does not allow write access to the
application directory by default. This is because the local filesystem is
ephemeral and may be wiped on each deployment and/or during system
maintenance. For this reason it needs to be manually enabled on the
settings page, and it should only be used for temporary storage purposes such as caching.
I'd recommend using a cloud file storage solution such as Amazon S3. You can
upload files to S3 from your application or the client can upload files
directly to Amazon S3 by using presigned URLs. This will allow you to build
a scalable, distributed file storage feature that is suitable for cloud-based
platforms like AppHarbor. Let me know if you need help
implementing/architecting a solution that suits your needs!
Best,
Rune
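As a concrete illustration of the S3 suggestion (not from the original thread), a minimal sketch using the AWS SDK for .NET that generates a presigned PUT URL; the bucket name and key are hypothetical, and credentials are assumed to come from the environment:

using System;
using Amazon.S3;
using Amazon.S3.Model;

// Hand this URL to the client so it can upload the image straight to S3,
// bypassing AppHarbor's ephemeral filesystem.
var s3 = new AmazonS3Client(); // credentials resolved from the environment
string uploadUrl = s3.GetPreSignedURL(new GetPreSignedUrlRequest
{
    BucketName = "my-app-uploads",       // hypothetical bucket
    Key = "Content/Images/Capture.PNG",  // hypothetical key
    Verb = HttpVerb.PUT,
    Expires = DateTime.UtcNow.AddMinutes(15)
});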
Sorry for wasting your time, guys, but thank you for looking into the problem.

Application says network drive doesn't exist, but found using OpenFileDialog

I have made a little app that runs on a Win7 PC. All it does is check the contents of a network drive at 1:00 in the morning, compare them to a folder on its local hard drive, and if there are differences, copy them to this folder.
The problem is, sometimes it can not find the network drive.
When the app starts up, the network drive is found using a button on the app which opens an OpenFileDialog, and the resulting drive letter is put into a textbox beside the button. From that point it should just run by itself. The PC is never turned off.
When it says the network drive cannot be found, I can manually press the button on the very same app, select the drive in the OpenFileDialog (the drive letter never changes), and the app will run flawlessly for a couple of days. Then the problem occurs again.
The question is: Why can the network drive be accessed through the OpenFileDialog on my app, but my app can not?
My app starts the copy process using this function (called with "Y:\") to determine whether the drive is present or not:
public bool fn_drive_exists(string par_string)
{
    // Reports whether the directory is currently reachable.
    DirectoryInfo di_dir = new DirectoryInfo(par_string);
    return di_dir.Exists;
}
...and sometimes it returns false, until I "wake it up" using the OpenFileDialog.
What does OpenFileDialog do that my app does not?
According to this SO post, the problem should go away if you use a UNC path instead of a mapped network drive.
If your destination has a static IP address, I suggest you use that IP address instead of the host name for the network drive.
This SO post describes a similar scenario to what you've described.
One of the links posted as a response to that question led me to this MSDN article which provides a variety of reasons as to why one might encounter errors when trying to access shared network drives by using a mapped drive letter.
Microsoft's suggestion (see below) is to simply use a UNC path.
A service (or any process running in a different security context) that must access a remote resource should use the Universal Naming Convention (UNC) name to access the resource.
To answer your actual question more specifically, with regards to why it suddenly can't access the network share, I would venture a guess to say that the network share is being disconnected by Windows due to an idle timeout, as discussed in KB297684. Any attempt to access the disconnected drive will be met with a small wait as the connection to the network share is re-established, which could presumably be what is causing your issue.
To test this theory, try writing some data to a file on the network drive at a relatively short interval (every 10 minutes, perhaps?) to try and convince Windows that the drive is still active.
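A minimal sketch of that keep-alive test, assuming the mapped drive is Y:\ as in the question (the file name is hypothetical):

using System;
using System.IO;
using System.Timers;

// Touch a file on the mapped drive every 10 minutes so Windows does not
// idle-disconnect the share (see KB297684 above).
var keepAlive = new Timer(TimeSpan.FromMinutes(10).TotalMilliseconds);
keepAlive.Elapsed += (sender, e) =>
{
    try
    {
        File.WriteAllText(@"Y:\keepalive.txt", DateTime.Now.ToString("o"));
    }
    catch (IOException)
    {
        // Share temporarily unreachable; the next tick will retry.
    }
};
keepAlive.Start();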
You can also try to use:
System.IO.Directory.Exists(par_string);
instead of writing your own method for the same thing. I would expect a framework method to be able to "wake" the network drive.
Note: the method also works for UNC paths (something like \\<server name or IP address>\<shared folder>)
Like Harvey says, use the UNC path to access the folder, for instance \\server\sharedfolder. In place of \\server use the name of the server. Your computer has a name and so does the server. You can also use the IP address if you know it. You replace \sharedfolder with the path to the files. Some examples:
\\AppsServer\c$\Program Files(x86)
\\FileServer1\d$\Users\John\My Documents
The c$ means the whole C drive is shared (the administrative share). If the entire drive is not shared, you will need to share the specific folder. You can do that by logging onto the server, right-clicking the folder, and selecting Properties. Then go to the Sharing tab and check the Share this folder checkbox. If your shared folder is called MyShare, then your UNC path to access it will be
\\server\MyShare

How To Do a Server To Server File Transfer without any user interaction?

In my scenario, users are able to upload zip files to a.example.com
I would love to create a "daemon" which, at specified time intervals, will move any zip files uploaded by the users from a.example.com to b.example.com.
From the info I've gathered so far:
The daemon will be an .ashx generic handler.
The daemon will be triggered at the specified time intervals via a Plesk cron job.
The daemon (thanks to SLaks) will consist of two FtpWebRequests (one for reading and one for writing).
So the question is how I could implement step 3:
Do I have to read the whole file into a memory array and try to write that to b.example.com?
How could I write the info I read to b.example.com?
Could I perform reading and writing of the file at the same time?
No, I am not asking for the full code; I just can't figure out how to perform reading and writing on the fly, without user interaction.
I mean, I could download the file locally from a.example.com and then upload it to b.example.com, but that is not the point.
Here is another solution:
Let ASP.NET on server A receive the file as a regular file upload and store it in directory XXX.
Have a Windows service on server A that checks directory XXX for new files.
Let the Windows service upload the file to server B using HttpWebRequest.
Let server B receive the file using a regular ASP.Net file upload page.
Links:
File upload example (ASP.Net): http://msdn.microsoft.com/en-us/library/aa479405.aspx
Building a windows service: http://www.codeproject.com/KB/system/WindowsService.aspx
Uploading files using HttpWebRequest: Upload files with HTTPWebrequest (multipart/form-data)
Problems you have to solve:
How to determine which files to upload to server B. I would use Directory.GetFiles in a Timer to find new files instead of using a FileSystemWatcher. You need to be able to check whether a file has been uploaded previously (delete it, rename it, check a DB, or whatever suits your needs).
Authentication on server B, so that only you can upload files to it.
To answer your questions: yes, you can read and write the files at the same time.
You can open an FtpWebRequest to server A and an FtpWebRequest to server B. On the request to server A you would ask for the file and get the response stream. Once you have the response stream, you read a chunk of bytes at a time and write that chunk of bytes to server B's request stream.
The only memory you would be using is the byte[] buffer in your read/write loop. Just keep in mind, though, that the underlying implementation of FtpWebRequest will download the complete FTP file before returning the response stream.
Similarly, you cannot complete your FtpWebRequest to upload the new file until all bytes have been written. In effect, the operations happen synchronously: you call GetResponse, which won't return until the full file is available, and only then can you 'upload' the new file.
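A sketch of that read/write loop, with placeholder URIs and credentials; whether the bytes truly stream end-to-end is subject to the FtpWebRequest buffering caveat just described:

using System.IO;
using System.Net;

// Download from server A while writing the same bytes to server B.
var download = (FtpWebRequest)WebRequest.Create("ftp://a.example.com/uploads/file.zip");
download.Method = WebRequestMethods.Ftp.DownloadFile;
download.Credentials = new NetworkCredential("userA", "passwordA"); // placeholder

var upload = (FtpWebRequest)WebRequest.Create("ftp://b.example.com/incoming/file.zip");
upload.Method = WebRequestMethods.Ftp.UploadFile;
upload.Credentials = new NetworkCredential("userB", "passwordB"); // placeholder

using (var source = download.GetResponse().GetResponseStream())
using (var target = upload.GetRequestStream())
{
    var buffer = new byte[8192]; // the only per-transfer memory we hold
    int read;
    while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        target.Write(buffer, 0, read);
}
using (upload.GetResponse()) { } // completes the upload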
References:
FtpWebRequest
Something you have to take into consideration is that long-running web requests (your .ashx generic handler) may be killed when the AppDomain recycles. Therefore you have to implement some sort of atomic transaction logic in your code, and you should handle sudden disconnects and incomplete FTP transfers if you go that way.
Have you looked at Windows Azure? This cloud platform supports a distributed file system and has built-in atomic transactions. Plus it scales nicely, should your service grow fast.
I would make it pretty simple. The client program uploads the file to server A. This can be done very easily in C# with an FtpWebRequest.
http://msdn.microsoft.com/en-us/library/ms229715.aspx
I would then have a service on server A that monitors the directory where files are uploaded. When a file is uploaded to that directory, or at certain intervals, it simply copies the files over to server B. Again, this can be done via FTP or other means if they're on the same network.
You need some listener on the target domain, with an FTP server running there, and on the client side you will use System.Net.WebClient and UploadFile or UploadFileAsync to send the file. Is that what you are asking?
It sounds like you don't really need a web service or handler. All you need is a program that will, at regular intervals, open an FTP connection to the other server and move the files. This can be done by any .NET program with the System.Net.WebClient class; it doesn't have to be a "web app". This other program could be a service, which could handle its own timing, or a simple app run by your cron job. If you need this to go two ways, for instance if the two servers are mirrors, you simply have the same app on the second box doing the same thing to upload files over to the first.
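For reference, the WebClient approach both answers mention can be as small as this; the FTP URI, credentials, and local path are placeholders:

using System.Net;

using (var client = new WebClient())
{
    client.Credentials = new NetworkCredential("user", "password"); // placeholder
    // Uploads the local file to the FTP server on the other box.
    client.UploadFile("ftp://b.example.com/incoming/file.zip", @"C:\uploads\file.zip");
}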
If both machines are in the same domain, couldn't you just do file replication at the OS level?
DFS
Set up keys if you are using Linux-based systems:
http://compdottech.blogspot.com/2007/10/unix-login-without-password-setting.html
Once you have the keys working, you can copy the file from system A to system B with regular shell scripts, without needing any user interaction.
