I like to use System.IO.File.WriteAllBytes() to keep things simple. But it seems that this method cannot be used everywhere. Writing to my local system works fine.
But when I use System.IO.File.WriteAllBytes() to write to a Windows share, it produces an empty file and fails with an exception:
System.UnauthorizedAccessException: Access to the path '/var/windowsshare/file.bin' is denied.
---> System.IO.IOException: Permission denied
Looking at the source at https://github.com/dotnet/runtime/blob/c72b54243ade2e1118ab24476220a2eba6057466/src/libraries/System.IO.FileSystem/src/System/IO/File.cs#L421, I found the following code working under the hood:
using (FileStream fs = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.Read))
{
    fs.Write(bytes, 0, bytes.Length);
}
If I change the code to use FileShare.None instead of FileShare.Read, it works. So I have a workaround, but I have to keep in mind that System.IO.File.WriteAllBytes() is not waterproof (is that correct?).
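For reference, a minimal sketch of that workaround (the helper name is mine):

using System.IO;

public static void WriteAllBytesExclusive(string path, byte[] bytes)
{
    // Same code WriteAllBytes() uses internally, but requesting exclusive
    // access (FileShare.None) instead of FileShare.Read.
    using (FileStream fs = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None))
    {
        fs.Write(bytes, 0, bytes.Length);
    }
}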
Unfortunately, my analysis left me with a few related questions:
So what is the best practice if the target path is configurable? Does the developer have to avoid System.IO.File.WriteAllBytes(), or does the system administrator have to find another way to mount the share?
What is wrong with FileShare.Read? Does the Windows share change permissions/locking while System.IO.File.WriteAllBytes() is writing?
Are there any tips for mounting the Windows share?
Update 1
WriteAllBytes():
// WriteAllBytes() Throws System.UnauthorizedAccessException
System.IO.File.WriteAllBytes("/var/windowsshare/file.bin", bytes);
Create and move with C#
// Create local and move + optional overwrite works!
var tmp = Path.GetTempFileName(); // local file
System.IO.File.WriteAllBytes(tmp, bytes); // write local
System.IO.File.Move(tmp, "/var/windowsshare/file.bin", true); // optional overwrite
ls:
# ls -l /var/windowsshare/file.bin
-rw-rw-rw-. 1 apache apache 20 Feb 9 11:43 /var/windowsshare/file.bin
# ls -Z /var/windowsshare/file.bin
system_u:object_r:cifs_t:s0 /var/windowsshare/file.bin
mount ...
# mount -l
//1.2.3.4/windowsshare on /var/windowsshare type cifs (rw,relatime,vers=3.1.1,cache=strict,username=luke,domain=dom,uid=48,forceuid,gid=48,forcegid,addr=1.2.3.4,file_mode=0666,dir_mode=0777,soft,nounix,nodfs,nouser_xattr,mapposix,noperm,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,_netdev)
# stat -f -c %T /var/windowsshare/file.bin
smb2
The following thread on GitHub helped me out: https://github.com/dotnet/runtime/issues/42790. In the end I remounted my CIFS shares with the nobrl option.
In the thread they also came to the conclusion that using FileShare.None works, but the root cause seems to be that the CIFS server we are using does not support byte-range locks.
I am not sure what all the implications of this are, but in my case the file only needs to be written once, and there should never be two processes writing to the same file.
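For reference, remounting with nobrl might look like this (address and credentials taken from the mount output above; other options trimmed):

# umount /var/windowsshare
# mount -t cifs //1.2.3.4/windowsshare /var/windowsshare -o username=luke,domain=dom,uid=48,gid=48,file_mode=0666,dir_mode=0777,vers=3.1.1,nobrl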
Related
An internal application in our company requires files aa, bb ... zz in folder X to be read upon startup. When I (as someone with FullControl access to folder X) launch the app, all goes fine. When any of my colleagues (who only have Read access to folder X) launch the app, they get an "Access denied to file aa ... " exception.
The files are read by the following routine:
public static void readFromBinaryFile(this QIHasFileIo xThis, string xFilePath)
{
    if (!System.IO.File.Exists(xFilePath))
        throw new System.Exception("File to read " + xFilePath + " does not exist ... ");

    if (xThis == null)
        throw new NullReferenceException("xThis cannot be null, as it is a readonly reference ... ");

    using (BinaryReader xReader = new BinaryReader(new FileStream(xFilePath, FileMode.Open, FileAccess.Read)))
        xThis.readObject(xReader);
}
i.e. I am specifying the Read mode, which should in turn require only Read access to the folder. When my colleagues go to folder X in Explorer, they can copy the aa, bb, ... files to their Desktops, which means they DO have Read access to the files.
So I am intrigued. This weird behaviour started with changes to the data server a couple of days ago. The most notable changes were that 1/ my colleagues stopped having admin rights on the data server and 2/ some GPOs might have been messed up (it has happened before in the company). The IT department is baffled as well, so I have no clue how to proceed.
Any hint is much appreciated,
Daniel
Edit: An already-deleted post proposed using FileShare.ReadWrite. I am grateful to the author for the comment; however, the file is guaranteed not to have a write lock on it. Hence, the "why File.copy works but File.OpenRead prompts access denied?" thread is not relevant here.
You need to add FileShare.ReadWrite to the parameters passed to the FileStream constructor.
This prevents the application trying to get an exclusive read lock, which might not be possible under some conditions where a shared read-write lock is possible (such as the file being left open for writing by another process).
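Applied to the routine from the question, that would look something like this (a sketch reusing the identifiers from the original code):

using (FileStream fs = new FileStream(xFilePath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
using (BinaryReader xReader = new BinaryReader(fs))
    xThis.readObject(xReader);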
I had a similar problem while reading a file. The issue was that the level of access for the Active Directory group for the particular group of users (the readers group) was not set up correctly.
I am not sure if you are using AD group authentication on the server. I would recommend that you check the type of access and the groups your colleagues have. Also, check how your application currently authenticates users to access the directory.
I am trying to copy a file that is included in the app bundle as a resource to the temp folder on the iPhone. While this works in the Simulator, on the device I get an exception:
System.UnauthorizedAccessException: Access to the path
"/private/var/mobile/Applications/B763C127-9882-4F76-8860-204AFEA8DD68/Client_iOS.app/testbundle.zip"
is denied.
The code I use is below. It cannot open the source file.
using (var sourceStream = File.Open("./demobundle.zip", FileMode.Open))
{
    sourceStream.CopyTo(targetStream);
}
What is the correct way of copying a file into a destination stream?
Why is it that I always find the answers to my questions practically immediately after I asked here? :-)
One has to specify the file access mode. If it is set to Read, it works. The default is ReadWrite, and writing to a file inside the (read-only) app bundle is obviously not possible.
using (var sourceStream = File.Open("./demobundle.zip", FileMode.Open, FileAccess.Read))
{
    sourceStream.CopyTo(targetStream);
}
At first I thought I was facing a very simple task. But now I realize it doesn't work as I imagined, so I hope you can help me out, because I'm pretty much stuck at the moment.
My scenario is this (on a Windows 2008 R2 Server):
A file gets uploaded 3 times per day to a FTP directory. The filename is always the same, which means the existing file gets overwritten every time.
I have programmed a simple C# service which watches the FTP upload directory; I'm using the FileSystemWatcher class for this.
The upload of the file takes a few minutes, so once the FileSystemWatcher registers a change, I periodically try to open the file to see whether it is still being uploaded (or locked).
Once the file isn't locked anymore, I try to move it over to my IIS virtual directory. I have to delete the old file first and then move the new file over. This is where my problem starts: the file always seems to be locked by IIS (the w3wp.exe process).
After some research, I found out that I have to kill the process which is locking the file (w3wp.exe in this case). In order to do this, I created a new application pool and converted the virtual directory into an application. Now my directory runs under a separate w3wp.exe process, which I can supposedly kill safely before moving the new file over.
Now I just need to find the right w3wp.exe process (there are 3 w3wp.exe processes running in total, each under a separate application pool) which has the lock on my target file. But this seems to be an almost impossible task in C#. I found many questions here on SO regarding "finding the process which locked a specific file", but none of the answers helped me.
Process Explorer, for example, tells me exactly which process is locking my file.
The next thing I don't understand is that I can delete the target file through Windows Explorer without any problem. Only my C# application gets the "file is being used by another process" error. I wonder what the difference is...
Here are the most notable questions on SO regarding locked files and C#:
Win32: How to get the process/thread that owns a mutex?
^^
The example code here does actually work, but it outputs the open handle IDs for every active process. I just can't figure out how to search for a specific filename, or at least resolve a handle ID to a filename. This WinAPI stuff is way above my head.
Using C#, how does one figure out what process locked a file?
^^
The example code here is exactly what I need, but unfortunately I can't get it to work. It always throws an AccessViolationException, which I can't figure out since the sample code makes extensive use of WinAPI calls.
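For what it's worth, the Windows Restart Manager API (Vista and later) offers a simpler route to the same information than the handle-enumeration samples linked above. This is a different technique from those answers; a minimal sketch built on the documented rstrtmgr.dll functions:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class LockFinder
{
    [StructLayout(LayoutKind.Sequential)]
    struct RM_UNIQUE_PROCESS
    {
        public int dwProcessId;
        public System.Runtime.InteropServices.ComTypes.FILETIME ProcessStartTime;
    }

    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    struct RM_PROCESS_INFO
    {
        public RM_UNIQUE_PROCESS Process;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 256)] public string strAppName;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 64)] public string strServiceShortName;
        public int ApplicationType;
        public uint AppStatus;
        public uint TSSessionId;
        [MarshalAs(UnmanagedType.Bool)] public bool bRestartable;
    }

    [DllImport("rstrtmgr.dll", CharSet = CharSet.Unicode)]
    static extern int RmStartSession(out uint pSessionHandle, int dwSessionFlags, string strSessionKey);

    [DllImport("rstrtmgr.dll")]
    static extern int RmEndSession(uint pSessionHandle);

    [DllImport("rstrtmgr.dll", CharSet = CharSet.Unicode)]
    static extern int RmRegisterResources(uint pSessionHandle, uint nFiles, string[] rgsFileNames,
        uint nApplications, RM_UNIQUE_PROCESS[] rgApplications, uint nServices, string[] rgsServiceNames);

    [DllImport("rstrtmgr.dll")]
    static extern int RmGetList(uint dwSessionHandle, out uint pnProcInfoNeeded, ref uint pnProcInfo,
        [In, Out] RM_PROCESS_INFO[] rgAffectedApps, ref uint lpdwRebootReasons);

    public static List<Process> WhoIsLocking(string path)
    {
        var result = new List<Process>();
        uint handle;
        if (RmStartSession(out handle, 0, Guid.NewGuid().ToString()) != 0)
            throw new Exception("Could not begin Restart Manager session.");
        try
        {
            if (RmRegisterResources(handle, 1, new[] { path }, 0, null, 0, null) != 0)
                throw new Exception("Could not register the file resource.");

            uint pnProcInfoNeeded, pnProcInfo = 0, lpdwRebootReasons = 0;
            // First call with an empty buffer reports how many entries are needed.
            int res = RmGetList(handle, out pnProcInfoNeeded, ref pnProcInfo, null, ref lpdwRebootReasons);
            if (res == 234) // ERROR_MORE_DATA: there are processes holding the file
            {
                var processInfo = new RM_PROCESS_INFO[pnProcInfoNeeded];
                pnProcInfo = pnProcInfoNeeded;
                if (RmGetList(handle, out pnProcInfoNeeded, ref pnProcInfo, processInfo, ref lpdwRebootReasons) == 0)
                    for (int i = 0; i < pnProcInfo; i++)
                        try { result.Add(Process.GetProcessById(processInfo[i].Process.dwProcessId)); }
                        catch (ArgumentException) { /* process already exited */ }
            }
        }
        finally { RmEndSession(handle); }
        return result;
    }
}

Calling LockFinder.WhoIsLocking(targetfilepath) should return the process(es), w3wp.exe included, that currently hold the file open.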
Simple task, impossible to do? I appreciate any help.
EDIT
Here are some relevant parts of my server code:
Helper function to detect if a file is locked:
private bool FileReadable(string file, int timeOutSeconds)
{
    DateTime timeOut = DateTime.Now.AddSeconds(timeOutSeconds);
    while (DateTime.Now < timeOut)
    {
        try
        {
            if (File.Exists(file))
            {
                // Opening with FileShare.None succeeds only if no one else
                // has the file open, i.e. the upload has finished.
                using (FileStream fs = File.Open(file, FileMode.Open, FileAccess.Read, FileShare.None))
                {
                    return true;
                }
            }
            return false;
        }
        catch (Exception)
        {
            // Still locked (or otherwise unreadable); try again shortly.
            Thread.Sleep(500);
        }
    }
    m_log.LogLogic(0, "FileReadable", "Timeout after [{0}] seconds trying to open the file {1}", timeOutSeconds, file);
    return false;
}
And this is the code in my FileSystemWatcher event, which monitors the FTP upload directory. filepath is the newly uploaded file; targetfilepath is the target file in my IIS directory.
// Here I'm waiting for the newly uploaded file to be ready
if (FileReadable(filepath, FWConfig.TimeOut))
{
    // Move the uploaded file to the IIS virtual directory
    string targetfilepath = Path.Combine(FWConfig.TargetPath, FWConfig.TargetFileName);
    if (File.Exists(targetfilepath))
    {
        m_log.LogLogic(4, "ProcessFile", "Trying to delete old file first: [{0}]", targetfilepath);
        // targetfilepath is the full path to my file in my IIS directory;
        // this always fails because the file is always locked by w3wp.exe :-(
        if (FileReadable(targetfilepath, FWConfig.TimeOut))
            File.Delete(targetfilepath);
    }
    File.Move(filepath, targetfilepath);
}
EDIT2:
Killing the w3wp.exe process while clients are downloading the file would be no problem for us. I'm just having a hard time finding the right w3wp.exe process which is locking the file.
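One way to map an application pool to its w3wp.exe PID: IIS starts each worker process with -ap "<PoolName>" on its command line (appcmd list wp shows the same mapping). A sketch using WMI, assuming an application pool named "MyAppPool":

using System;
using System.Management; // add a reference to System.Management.dll

class FindWorkerProcess
{
    // Returns the PID of the w3wp.exe serving the given application pool,
    // or -1 if that pool has no running worker process.
    static int GetWorkerProcessId(string appPoolName)
    {
        string query = "SELECT ProcessId, CommandLine FROM Win32_Process WHERE Name = 'w3wp.exe'";
        using (var searcher = new ManagementObjectSearcher(query))
        {
            foreach (ManagementObject mo in searcher.Get())
            {
                string commandLine = mo["CommandLine"] as string;
                // IIS starts each worker process as: w3wp.exe -ap "<PoolName>" ...
                if (commandLine != null && commandLine.Contains("-ap \"" + appPoolName + "\""))
                    return Convert.ToInt32(mo["ProcessId"]);
            }
        }
        return -1;
    }
}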
Also, my client application, which downloads the file on the clients, checks the HTTP HEAD for the Last-Modified date. The client checks the date every 10 minutes, so it is possible that the file is being locked by IIS because clients are continuously checking the HTTP HEAD for the file. Nonetheless, I don't understand why I can manually delete/rename/move the file through Windows Explorer without any problems. Why does this work, while my application gets a "locked by another process" exception?
One problem I've run into is that a file exists while it is still being written, which means it would be locked as well. If your FileReadable() function were called at this time, it would return false.
My solution was to have the process which writes the file write it to, say, OUTPUT1.TXT, and then, after it is fully written and the FileStream closed, rename it to OUTPUT2.TXT. This way, the existence of OUTPUT2.TXT indicates that the file is written and (hopefully) unlocked. Simply check for OUTPUT2.TXT in your FileReadable() loop.
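A minimal sketch of that pattern, using the OUTPUT1.TXT/OUTPUT2.TXT names from the description:

using System.IO;

static void PublishFile(string dir, byte[] data)
{
    string tempPath = Path.Combine(dir, "OUTPUT1.TXT");
    string finalPath = Path.Combine(dir, "OUTPUT2.TXT");

    using (FileStream fs = new FileStream(tempPath, FileMode.Create, FileAccess.Write))
    {
        fs.Write(data, 0, data.Length);
    } // the stream is closed (and the file unlocked) here

    if (File.Exists(finalPath))
        File.Delete(finalPath);      // remove the previous version, if any
    File.Move(tempPath, finalPath);  // rename; atomic on the same volume
}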
Everybody says...
"Do it a better way."
Nobody says how!!!
Here's how. Because you mentioned 'My Client Application,' there is a key opportunity here that you would not have if you didn't have control over the apps reading the file.
Just use new filenames each time.
You have control of the programs reading and writing the files. Put an incrementing number in the filenames and have the client pick the biggest number (actually the latest date, so your numbers can wrap around). Have the writer program clean up old files if it can; if not, they won't hurt anything. IIS will eventually let go of them. If not, just open up Explorer every week and do it yourself!
Other keys that make this work are the low frequency of updates (files won't build up too badly) and the fact that the FTP server and web server are on the same drive (otherwise the MOVE is not atomic and clients could get a half-copied file; if the FTP drive were different, the solution would be to copy to a temp directory on the web server and then move). See the sketch below.
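A sketch of the scheme; the report_<timestamp>.bin naming pattern is my invention:

using System;
using System.IO;
using System.Linq;

class LatestFilePicker
{
    // Writer side: a fresh, timestamped name for every upload.
    static string NextName(string dir)
    {
        return Path.Combine(dir, "report_" + DateTime.UtcNow.ToString("yyyyMMddHHmmss") + ".bin");
    }

    // Client side: always pick the most recently written file.
    static string Latest(string dir)
    {
        return new DirectoryInfo(dir).GetFiles("report_*.bin")
            .OrderByDescending(f => f.LastWriteTimeUtc)
            .First()
            .FullName;
    }
}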
But what if you can't change the client, or it has to read just one name?
Front-end it with a script. Have the client hit an ASPX page that sets the right HTTP headers, contains the "pick the right file" logic, and spits out the file contents. This is a very popular trick pages use to write images stored in a database out to the browser, while the img tag appears to read from a file (Google along those lines for sample code).
It sounds like a hack, but it's not. Modern lockless memory-cache systems do a similar thing. It is impossible for a lock or corruption to occur; until the "write" is complete, readers see the old version.
Plus, it's simple: everybody from a script kiddie to a punchcard veteran will know exactly what you're up to. Go low-tech!
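A minimal sketch of that front-end as a generic handler (.ashx); the folder, file pattern, and content type are assumptions, reusing the naming scheme from the previous sketch:

using System.IO;
using System.Linq;
using System.Web;

public class LatestFileHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Pick the newest upload; the client only ever sees one stable URL.
        var dir = new DirectoryInfo(context.Server.MapPath("~/App_Data/uploads"));
        var newest = dir.GetFiles("report_*.bin")
            .OrderByDescending(f => f.LastWriteTimeUtc)
            .First();

        context.Response.ContentType = "application/octet-stream";
        // Keep HEAD-based Last-Modified polling working:
        context.Response.Cache.SetLastModified(newest.LastWriteTimeUtc);
        context.Response.TransmitFile(newest.FullName);
    }

    public bool IsReusable { get { return true; } }
}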
You're troubleshooting a symptom of the problem, not a fix for the root cause. If you want to go down that path, here is code to kill processes: http://www.codeproject.com/Articles/20284/My-TaskManager - but the better idea would be to do it properly and work out what's wrong. I suggest, in the catch block of FileReadable:
catch (Exception ex)
{
    if (ex is IOException && IsFileLocked(ex))
    {
        // Confirm the code sees it as a file-locked issue, not some other exception.
        // It's not safe to force-unlock files used by other processes, because the
        // other process is likely reading/writing them.
    }
}

// Requires: using System.Runtime.InteropServices;
private static bool IsFileLocked(Exception exception)
{
    // Extract the Win32 error code from the HRESULT:
    // 32 = ERROR_SHARING_VIOLATION, 33 = ERROR_LOCK_VIOLATION
    int errorCode = Marshal.GetHRForException(exception) & ((1 << 16) - 1);
    return errorCode == 32 || errorCode == 33;
}
Turn off any anti-virus software and re-test.
Increase the polling timeout duration to see if it's just a timing thing.
Check the FTP log file, see the status for the disconnected client, and compare the status code with the ones here.
I don't see in your sample code where you are closing your file stream. Keeping the file stream open will keep a lock on the file. It would be a good idea to close the stream. You probably don't want to be killing your w3wp.exe process, as others here have mentioned.
Restarting IIS can unlock a file held by w3wp.exe:
cmd (run as administrator) -> iisreset /stop -> update/delete the file in Windows Explorer -> iisreset /start
Hi, I am working on a C# project and I am trying to lock a file from being opened, copied, or even deleted, using this code:
FileInfo fi = new FileInfo(textBox1.Text);
FileSecurity ds = fi.GetAccessControl();
ds.AddAccessRule(new FileSystemAccessRule("Authenticated Users", FileSystemRights.FullControl, AccessControlType.Deny));
fi.SetAccessControl(ds);
But when I open the file, it opens and can be deleted. Is there anything wrong with my code?
By the way, the code works perfectly everywhere but on a flash drive. I can block editing or copying of files on the computer, but on a flash drive the application is useless.
What filesystem does your flash drive have? I'm guessing FAT32, rather than NTFS.
FAT32 has no concept of per-file ACLs (or as far as I know, no concept of ACLs whatsoever).
See this article:
http://technet.microsoft.com/en-us/library/cc783530(WS.10).aspx
On a FAT or FAT32 volume, you can set permissions for shared folders but not for files and folders within a shared folder. Moreover, share permissions on a FAT or FAT32 volume restrict network access only, not access by users working directly on the computer.
The only option will be to open the file in exclusive access mode to prevent others from changing it while you are reading it.
See this question (stolen from Vitaliy's comment):
How to lock file
The code from the accepted answer:
using (FileStream fs = File.Open("MyFile.txt", FileMode.Open, FileAccess.Read, FileShare.None))
{
    // use fs
}
I'm quite new to C# and I need to write a file (grub) on an ext2 Linux partition from Windows 7.
What is the right way to do such a thing? Do I need to mount the partition with an external program?
I think you need to mount it with an external program such as: http://www.fs-driver.org/
Mount the drive using a driver like FS-driver and then write to it using standard C# file-writing techniques.
You can use Ext2Fsd to mount the partition in Windows, and then write to it as you would any other partition.
EXT2FSD Home Page
If you use Total Commander under Windows, one of the available plugins provides access to ext2/3/4. It has probably already been mentioned here, but it makes accessing Linux from Windows almost transparent. I don't have it installed right now or I'd look up the name.
SharpExt4 may help you with Linux file system reading and writing. It is a .NET library providing full access (read/write) to the Linux ext2/ext3/ext4 filesystems.
Here is the GitHub link:
https://github.com/nickdu088/SharpExt4
// Open the EXT4 SD card by its physical disk number
// (you can get the number from Windows Disk Management)
int diskNumber = 2; // example disk number
var disk = SharpExt4.ExtDisk.Open(diskNumber);
// In your case FAT32 is the 1st partition, ext4 is the 2nd one
// Open the EXT4 partition
var fs = ExtFileSystem.Open(disk.Parititions[1]);
// Create /home/pi/file.conf for writing
var file = fs.OpenFile("/home/pi/file.conf", FileMode.Create, FileAccess.Write);
var hello = "Hello World";
var buf = Encoding.ASCII.GetBytes(hello);
// Write to the file
file.Write(buf, 0, buf.Length);
file.Close();
How to use SharpExt4 to access Raspberry Pi SD Card Linux partition