I use File.Move to move large files (around 2 GB) from one directory to another. The destination folder is monitored, and any new file that appears there is uploaded to a CDN. But we have experienced partial uploads to the CDN, meaning a file was uploaded while it was still being moved from the source to the destination directory. So I need to know: does File.Move lock the destination file until the move has completed?
What you can do to avoid a partial upload to the CDN is to hide the file while it is being moved and unhide it once the move is completely done, and have the monitoring tool skip any file that is still hidden.
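For example, here is a minimal sketch of that hide-then-unhide approach (paths and the method name are illustrative, it assumes the usual using System.IO; directive, and it assumes the monitoring tool is configured to ignore hidden files):
static void MoveHidden(string sourcePath, string destPath)
{
    using (FileStream source = File.OpenRead(sourcePath))
    using (FileStream dest = new FileStream(destPath, FileMode.Create, FileAccess.Write))
    {
        // Hide the destination as soon as it exists so the monitor skips it
        // while the data is still being copied (a very short race remains
        // between creating the file and hiding it).
        File.SetAttributes(destPath, File.GetAttributes(destPath) | FileAttributes.Hidden);
        source.CopyTo(dest);
    }

    // The copy is complete: unhide the file so the monitor picks it up,
    // then remove the source to finish the "move".
    File.SetAttributes(destPath, File.GetAttributes(destPath) & ~FileAttributes.Hidden);
    File.Delete(sourcePath);
}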
Or you can lock the destination file so that other processes (in your case the monitoring tool, CuteFTP) can't access it until the stream has finished writing.
e.g.
static void Main(string[] args)
{
    string sourcePath = "mytext.txt";
    string destPath = @"dest\mytext.txt";

    using (FileStream sourceStream = new FileStream(sourcePath, FileMode.Open))
    using (FileStream destStream = new FileStream(destPath, FileMode.Create))
    {
        // Lock the destination range so other processes can't read it
        // until the copy has finished and the stream is disposed.
        destStream.Lock(0, sourceStream.Length);
        sourceStream.CopyTo(destStream);
    }

    // Remove the source to complete the "move".
    if (File.Exists(sourcePath))
    {
        File.Delete(sourcePath);
    }
}
Your problem is the monitoring on the destination folder.
Since you have a big file, it takes time to copy it, so what happens is:
You start moving the file
The monitoring system kicks in and starts uploading to the CDN
The file is only partially uploaded
You finish moving the file
One mitigation, assuming your monitoring system looks for files with a particular extension, is to move MyBigFile.ext as MyBigFile.ext.tmp and, once the move has finished, rename it back to MyBigFile.ext. That way, when the monitoring kicks in, it sees the complete file.
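A rough sketch of that rename trick (paths and names are illustrative; the final rename is fast and, on the same volume, atomic):
string source = @"C:\source\MyBigFile.ext";
string destTemp = @"C:\watched\MyBigFile.ext.tmp";
string destFinal = @"C:\watched\MyBigFile.ext";

// Slow part: while the data is moving, the monitor ignores the .tmp file.
File.Move(source, destTemp);

// Quick rename on the same volume: the monitor only ever sees a complete file.
File.Move(destTemp, destFinal);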
The situation: files are being dumped into a folder. This folder is constantly monitored by our own logic. When files are in the folder they are processed automatically. We only want to process files that have been fully copied into the directory.
If we copy a large file, e.g. 100 MB, into the folder, we don't want to process that file until it is fully copied ('complete').
Currently we test for this with the following code:
FileStream fs = null;
try
{
    // If this open succeeds, nothing else holds a conflicting handle,
    // so we treat the file as 'complete'.
    fs = fileInfo.Open(FileMode.Open, FileAccess.Read, FileShare.Read);
}
catch (IOException)
{
    // File is locked, don't do stuff (maybe Windows Explorer is still copying).
}
catch { }
finally
{
    fs?.Close();
}
As I think SO user Hans Passant once said, the only way to test this is to try to open the file.
This code works, but it feels 'old'.
Are there more efficient methods/techniques to implement this check? (Performance is critical: the faster, the better.)
I've had three reports now of users' machines crashing while using my software. The crashes are not related to my program, but after a restart the config files my program writes are all corrupt.
There is nothing special about how the files are written: I simply create a JSON representation and dump it to disk using File.WriteAllText():
// save our contents to the disk
string json = JsonConvert.SerializeObject(objectInfo, Formatting.Indented);
// write the contents
File.WriteAllText(path, json);
I've had a user send me one of the files; the length looks about right (~3 KB) but the contents are all 0x00.
According to the post below, File.WriteAllText should close the file handle, flushing any unwritten contents to the disk:
In my C# code does the computer wait until output is complete before moving on?
BUT, as pointed out by Alberto in the comments:
System.IO.File.WriteAllText, when it completes, will flush all the text to the filesystem cache; then, it will be lazily written to the drive.
So I presume what is happening is that the file is being cleared and initialized with 0x00, but the data has not yet been written when the system crashes.
I was thinking of maybe using some sort of temp file so the process would be like this:
Write new contents to temp file
Delete original file
Rename temp file to original
I don't think that will solve the problem, as I presume Windows will just move the file even though the I/O is still pending.
Is there any way I can force the machine to flush that data to disk rather than leaving it to decide when to do so, or perhaps a better way to update the file?
UPDATE:
Based on suggestions by @usr, @mikez and @Ilya Luzyanin, I've created a new WriteAllText function that performs the write using the following logic:
Create a temp file with the new contents using the FileOptions.WriteThrough flag
Writes the data to disk (won't return until the write has completed)
File.Replace to replace the real file with the temp file, making a backup of the original
With that logic, if the final file fails to load, my code can check for a backup file and load that instead.
Here is the code:
public static void WriteAllTextWithBackup(string path, string contents)
{
    // generate a temp filename
    var tempPath = Path.GetTempFileName();

    // create the backup name
    var backup = path + ".backup";

    // delete any existing backups
    if (File.Exists(backup))
        File.Delete(backup);

    // get the bytes
    var data = Encoding.UTF8.GetBytes(contents);

    // write the data to a temp file
    using (var tempFile = File.Create(tempPath, 4096, FileOptions.WriteThrough))
    {
        tempFile.Write(data, 0, data.Length);
    }

    // replace the contents
    File.Replace(tempPath, path, backup);
}
You can use FileStream.Flush(flushToDisk: true) to force the data to disk. Write to a temp file and use File.Replace to atomically replace the target file.
I believe this is guaranteed to work, but file systems give only weak guarantees, and those guarantees are hardly ever documented and are complex.
Alternatively, you can use Transactional NTFS if it is available. It is accessible from .NET.
FileOptions.WriteThrough can replace the Flush call, but you still need the temp file if your data can exceed a single cluster in size.
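For illustration, here is a minimal sketch of that Flush-then-Replace approach (the method name WriteAllTextDurable and the .tmp/.backup naming are mine, not from the answer; it assumes the target file already exists so File.Replace can back it up, and the usual using System.IO; and using System.Text; directives):
public static void WriteAllTextDurable(string path, string contents)
{
    string tempPath = path + ".tmp";
    byte[] data = Encoding.UTF8.GetBytes(contents);

    using (var fs = new FileStream(tempPath, FileMode.Create, FileAccess.Write))
    {
        fs.Write(data, 0, data.Length);
        // flushToDisk: true pushes the data through the OS file-system cache
        // to the device, instead of leaving it to be written lazily.
        fs.Flush(flushToDisk: true);
    }

    // Atomic swap: 'path' now has the new contents, the old contents are kept
    // as a backup, and the temp file is gone.
    File.Replace(tempPath, path, path + ".backup");
}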
Basic Code:
string startPath = @"C:\intel\logs";
string zipPath = @"C:\intel\logs-" + DateTime.Now.ToString("yyyy_dd_M-HH_mm_ss") + ".zip";
ZipFile.CreateFromDirectory(startPath, zipPath);
Error: the process cannot access the file "path_to_the_zip_file_created.zip" because it is being used by another process.
The above setup works fine on Windows 7, where I have Visual Studio installed, but I get the above error message when running on Windows Server 2008 R2.
I have checked the antivirus logs and it does not block the application, nor does it lock the zip file that is created.
// WRONG
ZipFile.CreateFromDirectory(@"C:\somefolder", @"C:\somefolder\somefile.zip");
// RIGHT
ZipFile.CreateFromDirectory(@"C:\somefolder", @"C:\someotherfolder\somefile.zip");
I used to make the same mistake: creating the zip file inside the very folder I was zipping.
That causes this error, of course.
I came across this question because I was trying to zip the folder where my log files were being actively written by a running application. Kyle Johnson's answer could work, but it adds the overhead of copying the folder and the need to clean the copy up afterwards. Here's some code that will create the zip even while log files are being written to:
void SafelyCreateZipFromDirectory(string sourceDirectoryName, string zipFilePath)
{
    using (FileStream zipToOpen = new FileStream(zipFilePath, FileMode.Create))
    using (ZipArchive archive = new ZipArchive(zipToOpen, ZipArchiveMode.Create))
    {
        foreach (var file in Directory.GetFiles(sourceDirectoryName))
        {
            var entryName = Path.GetFileName(file);
            var entry = archive.CreateEntry(entryName);
            entry.LastWriteTime = File.GetLastWriteTime(file);

            // FileShare.ReadWrite is the key: it lets us read files that another
            // process still has open for writing (e.g. active log files).
            using (var fs = new FileStream(file, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            using (var stream = entry.Open())
            {
                fs.CopyTo(stream);
            }
        }
    }
}
I had the exact same problem. The workaround is to copy the folder you are zipping to another folder and point CreateFromDirectory there. Don't ask me why this works but it does.
Directory.CreateDirectory(<new directory path>);
File.Copy(<copy contents into new folder>);
ZipFile.CreateFromDirectory(<new folder path>, <zipPath>);
The other answers provide the correct reason, but I had a little trouble understanding them at first sight.
If the path of the zip file being created is inside the directory that is passed to ZipFile.CreateFromDirectory, ZipFile creates the desired zip file and starts adding the directory's files to it. Eventually it will try to add the desired zip file to the archive as well, since it is in the same directory. That is neither possible nor wanted, because the zip file is already in use by the CreateFromDirectory method.
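For illustration, a simple guard along these lines catches the mistake before calling the API (the check itself is mine, not part of ZipFile, and it is only a plain path-prefix test):
string sourceDir = Path.GetFullPath(@"C:\intel\logs");
string zipPath = Path.GetFullPath(@"C:\intel\logs.zip");   // note: outside sourceDir

// Refuse to create the zip inside the directory that is being zipped.
string zipDir = Path.GetDirectoryName(zipPath);
if (zipDir != null && zipDir.StartsWith(sourceDir, StringComparison.OrdinalIgnoreCase))
    throw new InvalidOperationException("The zip file must not be created inside the directory being zipped.");

ZipFile.CreateFromDirectory(sourceDir, zipPath);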
If you're getting this error because NLog is locking your log files, you can use the following workaround. Add 'keepFileOpen' attribute to your nlog tag inside NLog.config and set it to false:
<nlog xmlns=.......
keepFileOpen="false"
....>
More details here.
Note that this setting will have a negative impact on NLog's logging performance, as indicated here.
I have a function that checks every file in a directory and writes a list to the console. The problem is, I don't want it to include files that are currently being copied into the directory; I only want it to show the files that are complete. How do I do that? Here is my code:
foreach (string file in Directory.EnumerateFiles(@"C:\folder"))
{
    Console.WriteLine(file);
}
There's really no way to tell "being copied" vs "locked for writing by something". Relevant: How to check for file lock? and Can I simply 'read' a file that is in use?
If you want to simply display a list of files that are not open for writing, you can do that by attempting to open them:
foreach (string file in Directory.EnumerateFiles(@"C:\folder"))
{
    try
    {
        // Open with generous sharing; if this throws, the file is locked.
        using (File.Open(file, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        {
            Console.WriteLine(file);
        }
    }
    catch
    {
        // file is in use
        continue;
    }
}
However -- lots of caveats.
Immediately after displaying the filename (i.e. at the end of the using block) the file could be opened by something else
The process writing the file may have used FileShare.Read, which means the call will succeed even though the file is still being written to.
I'm not sure what exactly you're up to here, but it sounds like two processes sharing a queue directory: one writing, one reading/processing. The biggest challenge is that writing a file takes time, and so your "reading" process ends up picking it up and trying to read it before the whole file is there, which will fail in some way depending on the sharing mode, how your apps are written, etc.
A common pattern to deal with this situation is to use an atomic file operation like Move:
Do the (slow) write/copy operation to a temporary directory that's on the same file system (very important) as the queue directory
Once complete, do a Move from the temporary directory to the queue directory.
Since move is atomic, the file will either not be there, or it will be 100% there -- there is no opportunity for the "reading" process to ever see the file while it's partially there.
Note that if you do the move across file systems, it will act the same as a copy.
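A minimal sketch of that pattern (directory and file names are illustrative; both directories must be on the same volume for the Move to be an atomic rename):
string stagingPath = @"C:\queue\staging\payload.dat";   // same volume as the queue
string queuePath = @"C:\queue\incoming\payload.dat";

// Slow part: write/copy into the staging directory, which the reader never scans.
File.Copy(@"C:\source\payload.dat", stagingPath, overwrite: true);

// Fast, atomic rename: the reader either sees the whole file or no file at all.
File.Move(stagingPath, queuePath);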
There's no "current files being copied" list stored anywhere in Windows/.NET/whatever. Probably the most you could do is attempt to open each file for append and see if you get an exception. Depending on the size and location of your directory, and on the security setup, that may not be an option.
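As a rough sketch of that "open for append" probe (the helper name is mine, and the result is only meaningful for the instant the file is opened):
static bool LooksComplete(string path)
{
    try
    {
        // If we can open the file for append with no sharing, nothing else
        // currently has it open for writing.
        using (File.Open(path, FileMode.Append, FileAccess.Write, FileShare.None))
        {
            return true;
        }
    }
    catch (IOException)
    {
        // Sharing violation: something is still writing or copying the file.
        return false;
    }
}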
There isn't a clean way to do this, but this... works...
foreach (var file in new DirectoryInfo(@"C:\Folder").GetFiles())
{
    try
    {
        // Dispose the stream right away; we only care whether the open succeeds.
        using (file.OpenRead()) { }
    }
    catch
    {
        continue;
    }
    Console.WriteLine(file.Name);
}
I have a problem with multithreading when copying and accessing files.
I have a service that downloads and unpacks a zip archive, then copies a file from the unzipped folder to the right location:
// Download and unzip the archive...
// Copy the needed file to its destination
File.Copy(fileName, fileDestination);
Then I start a separate thread that needs to access the copied file:
ThreadPool.QueueUserWorkItem(s => ProcessCopiedFile(fileDestination));
Here's the code fragment from ProcessCopiedFile:
private void ProcessCopiedFile(string filePath)
{
    ...
    // Load the file that was copied above
    var xml = XDocument.Load(filePath);
    ...
    // Do other work...
}
The XDocument.Load call fails with this exception:
The process cannot access the file <FileName> because it is used by another process.
It seems like File.Copy keeps the resulting file locked. When I do all the work synchronously, it works without errors.
Have you any thoughts?
Thx.
File.Copy does not keep anything open or locked once it has returned; it is a blocking operation whose duration depends on disk/network I/O and the file size.
Of course, when moving from sync to async, you should make sure you do not access the destination file while the copy is still in progress.
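One simple way to guarantee that ordering, sketched here with the names from the question, is to keep the copy and the processing on the same code path, so the file is only ever read after File.Copy has returned:
ThreadPool.QueueUserWorkItem(_ =>
{
    // The copy completes (or throws) before the file is ever read.
    File.Copy(fileName, fileDestination, overwrite: true);
    ProcessCopiedFile(fileDestination);
});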
Copy the file with a stream to avoid the Windows lock from File.Copy:
using (var s = new MemoryStream(File.ReadAllBytes(filePath)))
{
    using (var fs = new FileStream(newLocation, FileMode.Create))
    {
        s.WriteTo(fs);
    }
}