C# Read/Write file from multiple applications

I have a scenario where we maintain a rates file (.xml) which is accessed by 3 different applications running on 3 different servers. All 3 applications use RatesMaintenance.dll, which has the 4 methods below: Load, Write, Read and Close.
All 3 applications write into the file continuously, so I added a Monitor.Enter and Monitor.Exit mechanism, assuming that these operations from the 3 different applications would not collide. But at the moment, in some cases, I am getting the error - "Could not open the rates file".
As per my understanding, this means that for some reason the 3 applications try to access the file at the same time. Could anyone please suggest how to handle such a scenario?
Monitor.Enter(RatesFileLock);
try
{
    // Open rates file
    LoadRatesFile(false);

    // Write rates into file
    WriteRatesToFile();

    // Close rates file
    CloseRatesFile();
}
finally
{
    Monitor.Exit(RatesFileLock);
}
Method signature of Load-
LoadRatesFile(bool isReadOnly)
For opening file-
new FileStream(RatesFilePath,
    isReadOnly ? FileMode.Open : FileMode.OpenOrCreate,
    isReadOnly ? FileAccess.Read : FileAccess.ReadWrite,
    isReadOnly ? FileShare.ReadWrite : FileShare.None);
.... remaining Rates reading logic code here
For reading Rates from file-
Rates = LoadRatesFile(true);
For Writing Rates into file-
if (_RatesFileStream != null && _RatesInfo != null && _RatesFileSerializer != null)
{
    _RatesFileStream.SetLength(0);
    _RatesFileSerializer.Serialize(_RatesFileStream, _RatesInfo);
}
In closing file method-
_RatesFileStream.Close();
_RatesFileStream = null;
I hope I have explained my scenario in enough detail. Please let me know if anyone needs more details.

While the other answers are correct in that you won't be able to get a perfect solution with files that are accessed concurrently by multiple processes, adding a retry mechanism may make it reliable enough for your use case.
Before I show one way to do that, I've got two minor suggestions. C#'s "using" blocks are really useful for dealing with resources such as files and locks that you want to be sure to dispose of after use. In your code, the monitor is always exited because you use try..finally (though this would still be clearer with an outer "lock" block), but you don't close the file if the WriteRatesToFile method fails.
So, firstly, I'd suggest changing your code to something like the following -
private static object _ratesFileLock = new object();

public void UpdateRates()
{
    lock (_ratesFileLock)
    {
        using (var stream = GetRatesFileStream())
        {
            var rates = LoadRatesFile(stream);
            // Apply any other update logic here
            WriteRatesToFile(rates, stream);
        }
    }
}

private Stream GetRatesFileStream()
{
    return File.Open("rates.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.ReadWrite);
}

private IEnumerable<Rate> LoadRatesFile(Stream stream)
{
    // Apply any other logic here
    return RatesSerialiser.Deserialise(stream);
}

private void WriteRatesToFile(IEnumerable<Rate> rates, Stream stream)
{
    RatesSerialiser.Serialise(rates, stream);
}
This opens the file stream once and then reuses it between the load and write actions - and reliably disposes of it, even if an error is encountered inside the using block (the same applies to the "lock" block, which is simpler than Monitor.Enter/Exit with try..finally).
This could quite simply be extended to include a retry mechanism so that if the file is locked by another process then we wait a short time and then try again -
private static object _ratesFileLock = new object();

public void UpdateRates()
{
    Attempt(TryToUpdateRates, maximumNumberOfAttempts: 50, timeToWaitBetweenRetriesInMs: 100);
}

private void TryToUpdateRates()
{
    lock (_ratesFileLock)
    {
        using (var stream = GetRatesFileStream())
        {
            var rates = LoadRatesFile(stream);
            // Apply any other update logic here
            WriteRatesToFile(rates, stream);
        }
    }
}

private Stream GetRatesFileStream()
{
    return File.Open("rates.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.ReadWrite);
}

private IEnumerable<Rate> LoadRatesFile(Stream stream)
{
    // Apply any other logic here
    return RatesSerialiser.Deserialise(stream);
}

private void WriteRatesToFile(IEnumerable<Rate> rates, Stream stream)
{
    RatesSerialiser.Serialise(rates, stream);
}

private static void Attempt(Action work, int maximumNumberOfAttempts, int timeToWaitBetweenRetriesInMs)
{
    var numberOfFailedAttempts = 0;
    while (true)
    {
        try
        {
            work();
            return;
        }
        catch
        {
            numberOfFailedAttempts++;
            if (numberOfFailedAttempts >= maximumNumberOfAttempts)
                throw;
            Thread.Sleep(timeToWaitBetweenRetriesInMs);
        }
    }
}

What you're trying to do is difficult bordering on impossible. I won't say that it's impossible because there is always a way, but it's better not to try to make something work in a way it wasn't intended to.
And even if you get it to work and you could ensure that applications on multiple servers don't overstep each other, someone could write some other process that locks the same file because it doesn't know about the system in place for gaining access to that file and playing well with the other apps.
You could check to see if the file is in use before opening it, but there's no guarantee that another server won't open it in between when you checked and when you tried to open it.
The ideal answer is not to use a file as a database accessed by multiple applications concurrently. That's exactly what databases are for: they can handle multiple concurrent requests to read and write records. Sometimes we use files for logs or other data, but if you've got applications running on three servers then you really need a database.

C# Exception, file being used by another process [duplicate]

Writing StringBuilder to file asynchronously. This code takes control of a file, writes a stream to it and releases it. It deals with requests from asynchronous operations, which may come in at any time.
The FilePath is set per class instance (so the lock object is per instance), but there is potential for conflict since these classes may share FilePaths. That sort of conflict, as well as all other types from outside the class instance, would be dealt with by retries.
Is this code suitable for its purpose? Is there a better way to handle this that means less (or no) reliance on the catch-and-retry mechanic?
Also, how do I avoid catching exceptions that have occurred for other reasons?
public string Filepath { get; set; }
private Object locker = new Object();

public async Task WriteToFile(StringBuilder text)
{
    int timeOut = 100;
    Stopwatch stopwatch = new Stopwatch();
    stopwatch.Start();

    while (true)
    {
        try
        {
            // Wait for resource to be free
            lock (locker)
            {
                using (FileStream file = new FileStream(Filepath, FileMode.Append, FileAccess.Write, FileShare.Read))
                using (StreamWriter writer = new StreamWriter(file, Encoding.Unicode))
                {
                    writer.Write(text.ToString());
                }
            }
            break;
        }
        catch
        {
            // File not available, conflict with other class instances or application
        }

        if (stopwatch.ElapsedMilliseconds > timeOut)
        {
            // Give up.
            break;
        }

        // Wait and retry
        await Task.Delay(5);
    }
    stopwatch.Stop();
}
How you approach this is going to depend a lot on how frequently you're writing. If you're writing a relatively small amount of text fairly infrequently, then just use a static lock and be done with it. That might be your best bet in any case because the disk drive can only satisfy one request at a time. Assuming that all of your output files are on the same drive (perhaps not a fair assumption, but bear with me), there's not going to be much difference between locking at the application level and the lock that's done at the OS level.
So if you declare locker as:
static object locker = new object();
You'll be assured that there are no conflicts with other threads in your program.
If you want this thing to be bulletproof (or at least reasonably so), you can't get away from catching exceptions. Bad things can happen. You must handle exceptions in some way. What you do in the face of error is something else entirely. You'll probably want to retry a few times if the file is locked. If you get a bad path or filename error or disk full or any of a number of other errors, you probably want to kill the program. Again, that's up to you. But you can't avoid exception handling unless you're okay with the program crashing on error.
By the way, you can replace all of this code:
using (FileStream file = new FileStream(Filepath, FileMode.Append, FileAccess.Write, FileShare.Read))
using (StreamWriter writer = new StreamWriter(file, Encoding.Unicode))
{
writer.Write(text.ToString());
}
With a single call:
File.AppendAllText(Filepath, text.ToString());
Assuming you're using .NET 4.0 or later. See File.AppendAllText.
One other way you could handle this is to have the threads write their messages to a queue, and have a dedicated thread that services that queue. You'd have a BlockingCollection of messages and associated file paths. For example:
class LogMessage
{
    public string Filepath { get; set; }
    public string Text { get; set; }
}

BlockingCollection<LogMessage> _logMessages = new BlockingCollection<LogMessage>();
Your threads write data to that queue:
_logMessages.Add(new LogMessage { Filepath = "foo.log", Text = "this is a test" });
You start a long-running background task that does nothing but service that queue:
foreach (var msg in _logMessages.GetConsumingEnumerable())
{
    // of course you'll want your exception handling in here
    File.AppendAllText(msg.Filepath, msg.Text);
}
Your potential risk here is that threads create messages too fast, causing the queue to grow without bound because the consumer can't keep up. Whether that's a real risk in your application is something only you can say. If you think it might be a risk, you can put a maximum size (number of entries) on the queue so that if the queue size exceeds that value, producers will wait until there is room in the queue before they can add.
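For example, here is a minimal sketch of bounding the queue; the capacity of 10,000 is an arbitrary illustrative value. Once the queue is full, producers calling Add will block until the consumer makes room:
BlockingCollection<LogMessage> _logMessages = new BlockingCollection<LogMessage>(boundedCapacity: 10000);
// Producers block on Add once 10,000 messages are queued, applying backpressure
// instead of letting the queue grow without bound.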
You could also use a ReaderWriterLock; it is considered a more 'appropriate' way to control thread safety when dealing with read/write operations...
To debug my web apps (when remote debug fails) I use the following ('debug.txt' ends up in the \bin folder on the server):
public static class LoggingExtensions
{
    static ReaderWriterLock locker = new ReaderWriterLock();

    public static void WriteDebug(string text)
    {
        try
        {
            locker.AcquireWriterLock(int.MaxValue);
            System.IO.File.AppendAllLines(Path.Combine(Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase).Replace("file:\\", ""), "debug.txt"), new[] { text });
        }
        finally
        {
            locker.ReleaseWriterLock();
        }
    }
}
Hope this saves you some time.
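As an aside, ReaderWriterLockSlim is the recommended replacement for ReaderWriterLock in current .NET. A minimal sketch of the same helper written against it (the fixed "debug.txt" path is an illustrative simplification):
public static class LoggingExtensionsSlim
{
    // ReaderWriterLockSlim has better performance and simpler enter/exit semantics
    private static readonly ReaderWriterLockSlim locker = new ReaderWriterLockSlim();

    public static void WriteDebug(string text)
    {
        locker.EnterWriteLock();
        try
        {
            System.IO.File.AppendAllLines("debug.txt", new[] { text });
        }
        finally
        {
            locker.ExitWriteLock();
        }
    }
}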

implementing a c# read write lock where some reads produce writes

I have to implement some .Net code involving a shared resource accessed by different threads. In principle, this should be solved with a simple read write lock. However, my solution requires that some of the read accesses do end up producing a write operation. I first checked the ReaderWriterLockSlim, but by itself it does not solve the problem, because it requires that I know in advance if a read operation can turn into a write operation, and this is not my case. I finally opted to simply use a ReaderWriterLockSlim, and when the read operation "detects" that it needs to do a write operation, release the read lock and acquire a write lock. I am not sure if there is a better solution, or even if this solution could lead to some synchronization issue (I have experience with Java, but I am fairly new to .Net).
Below some sample code illustrating my solution:
public class MyClass
{
    private int[] data;
    private readonly ReaderWriterLockSlim syncLock = new ReaderWriterLockSlim();

    public void modifyData()
    {
        try
        {
            syncLock.EnterWriteLock();
            // clear my array and read from database...
        }
        finally
        {
            syncLock.ExitWriteLock();
        }
    }

    public int readData(int index)
    {
        try
        {
            syncLock.EnterReadLock();
            // some initial preprocessing of the arguments
            try
            {
                syncLock.ExitReadLock();
                syncLock.EnterWriteLock();
                // check if a write is needed <--- this operation is fast, and, in most cases, the result will be false
                // if true, perform the write operation
            }
            finally
            {
                syncLock.ExitWriteLock();
                syncLock.EnterReadLock();
            }
            return data[index];
        }
        finally
        {
            syncLock.ExitReadLock();
        }
    }
}
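One standard alternative worth knowing here: ReaderWriterLockSlim has an upgradeable read lock designed for exactly this "read that may become a write" pattern, and it avoids the unlocked gap between releasing the read lock and re-acquiring the write lock. A sketch of an alternative readData for the class above (WriteIsNeeded is a hypothetical stand-in for the fast check described in the comments):
public int readDataUpgradeable(int index)
{
    // Only one thread may hold the upgradeable lock at a time, but it can
    // coexist with plain readers and be upgraded to a write lock without
    // letting another writer slip in between.
    syncLock.EnterUpgradeableReadLock();
    try
    {
        // some initial preprocessing of the arguments
        if (WriteIsNeeded())   // hypothetical fast check
        {
            syncLock.EnterWriteLock();
            try
            {
                // perform the write operation
            }
            finally
            {
                syncLock.ExitWriteLock();
            }
        }
        return data[index];
    }
    finally
    {
        syncLock.ExitUpgradeableReadLock();
    }
}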

Tell if a file is written to a networkshare completely [duplicate]

When a file is created (FileSystemWatcher_Created) in one directory I copy it to another. But when I create a big (>10 MB) file it fails to copy, because the copy starts while the file has not yet finished being created...
This causes the error "Cannot copy the file, because it's used by another process" to be raised. ;(
Any help?
class Program
{
    static void Main(string[] args)
    {
        string path = @"D:\levan\FolderListenerTest\ListenedFolder";
        FileSystemWatcher listener;

        listener = new FileSystemWatcher(path);
        listener.Created += new FileSystemEventHandler(listener_Created);
        listener.EnableRaisingEvents = true;

        while (Console.ReadLine() != "exit") ;
    }

    public static void listener_Created(object sender, FileSystemEventArgs e)
    {
        Console.WriteLine
        (
            "File Created:\n"
            + "ChangeType: " + e.ChangeType
            + "\nName: " + e.Name
            + "\nFullPath: " + e.FullPath
        );
        File.Copy(e.FullPath, @"D:\levan\FolderListenerTest\CopiedFilesFolder\" + e.Name);
        Console.Read();
    }
}
There is only a workaround for the issue you are facing.
Check whether the file is in use before starting the copy. You can call the following function until you get the value False.
1st Method, copied directly from this answer:
private bool IsFileLocked(FileInfo file)
{
    FileStream stream = null;

    try
    {
        stream = file.Open(FileMode.Open, FileAccess.ReadWrite, FileShare.None);
    }
    catch (IOException)
    {
        // the file is unavailable because it is:
        // still being written to
        // or being processed by another thread
        // or does not exist (has already been processed)
        return true;
    }
    finally
    {
        if (stream != null)
            stream.Close();
    }

    // file is not locked
    return false;
}
2nd Method:
const int ERROR_SHARING_VIOLATION = 32;
const int ERROR_LOCK_VIOLATION = 33;

private bool IsFileLocked(string file)
{
    // check that problem is not in destination file
    if (File.Exists(file) == true)
    {
        FileStream stream = null;
        try
        {
            stream = File.Open(file, FileMode.Open, FileAccess.ReadWrite, FileShare.None);
        }
        catch (Exception ex2)
        {
            //_log.WriteLog(ex2, "Error in checking whether file is locked " + file);
            int errorCode = Marshal.GetHRForException(ex2) & ((1 << 16) - 1);
            if ((ex2 is IOException) && (errorCode == ERROR_SHARING_VIOLATION || errorCode == ERROR_LOCK_VIOLATION))
            {
                return true;
            }
        }
        finally
        {
            if (stream != null)
                stream.Close();
        }
    }
    return false;
}
From the documentation for FileSystemWatcher:
The OnCreated event is raised as soon as a file is created. If a file is being copied or transferred into a watched directory, the OnCreated event will be raised immediately, followed by one or more OnChanged events.
So, if the copy fails, (catch the exception), add it to a list of files that still need to be moved, and attempt the copy during the OnChanged event. Eventually, it should work.
Something like (incomplete; catch specific exceptions, initialize variables, etc):
public static void listener_Created(object sender, FileSystemEventArgs e)
{
    Console.WriteLine
    (
        "File Created:\n"
        + "ChangeType: " + e.ChangeType
        + "\nName: " + e.Name
        + "\nFullPath: " + e.FullPath
    );

    try
    {
        File.Copy(e.FullPath, @"D:\levani\FolderListenerTest\CopiedFilesFolder\" + e.Name);
    }
    catch
    {
        _waitingForClose.Add(e.FullPath);
    }
    Console.Read();
}

public static void listener_Changed(object sender, FileSystemEventArgs e)
{
    if (_waitingForClose.Contains(e.FullPath))
    {
        try
        {
            File.Copy(...);
            _waitingForClose.Remove(e.FullPath);
        }
        catch {}
    }
}
It's an old thread, but I'll add some info for other people.
I experienced a similar issue with a program that writes PDF files; sometimes they take 30 seconds to render, which is the same period that my watcher_FileCreated class waits before copying the file.
The files were not locked.
In this case I checked the size of the PDF and then waited 2 seconds before comparing the new size; if they were unequal, the thread would sleep for 30 seconds and try again.
You're actually in luck - the program writing the file locks it, so you can't open it. If it hadn't locked it, you would have copied a partial file, without having any idea there's a problem.
When you can't access a file, you can assume it's still in use (better yet - try to open it in exclusive mode, and see if someone else is currently opening it, instead of guessing from the failure of File.Copy). If the file is locked, you'll have to copy it at some other time. If it's not locked, you can copy it (there's slight potential for a race condition here).
When is that 'other time'? I don't remember when FileSystemWatcher sends multiple events per file - check it out; it might be enough for you to simply ignore the event and wait for another one. If not, you can always set up a timer and recheck the file in 5 seconds.
Well you already give the answer yourself; you have to wait for the creation of the file to finish. One way to do this is via checking if the file is still in use. An example of this can be found here: Is there a way to check if a file is in use?
Note that you will have to modify this code for it to work in your situation. You might want to have something like the following sketch:
public static void listener_Created()
{
    // Poll until the owning application releases the file, then copy it
    while (CheckFileInUse())
        Thread.Sleep(1000);

    CopyFile();
}
Obviously you should protect yourself from an infinite while just in case the owner application never releases the lock. Also, it might be worth checking out the other events from FileSystemWatcher you can subscribe to. There might be an event which you can use to circumvent this whole problem.
When the file is being written in binary (byte by byte), creating a FileStream with the above solutions does not work, because the file appears ready even though it is still being written to at every byte, so in this situation you need a different workaround, like this:
Do this when the file is created, or when you want to start processing it:
long fileSize = 0;
currentFile = new FileInfo(path);

while (fileSize < currentFile.Length) // check size is stable or increased
{
    fileSize = currentFile.Length;      // get current size
    System.Threading.Thread.Sleep(500); // wait a moment for processing copy
    currentFile.Refresh();              // refresh length value
}
// Now the file is ready for any process!
So, having glanced quickly through some of these and other similar questions I went on a merry goose chase this afternoon trying to solve a problem with two separate programs using a file as a synchronization (and also file save) method. A bit of an unusual situation, but it definitely highlighted for me the problems with the 'check if the file is locked, then open it if it's not' approach.
The problem is this: the file can become locked between the time that you check it and the time you actually open the file. It's really hard to track down the sporadic "Cannot copy the file, because it's used by another process" error if you aren't looking for it too.
The basic resolution is to just try to open the file inside a try block so that if it's locked, you can catch the exception and try again. That way there is no elapsed time between the check and the opening; the OS does them at the same time.
The code here uses File.Copy, but it works just as well with any of the static methods of the File class: File.Open, File.ReadAllText, File.WriteAllText, etc.
/// <param name="timeout">how long to keep trying in milliseconds</param>
static void safeCopy(string src, string dst, int timeout)
{
    while (timeout > 0)
    {
        try
        {
            File.Copy(src, dst);

            // don't forget to either return from the function or break out of the while loop
            break;
        }
        catch (IOException)
        {
            // you could do the sleep in here, but it's probably a good idea to exit the error handler as soon as possible
        }
        Thread.Sleep(100);

        // if it's a very long wait this will accumulate very small errors.
        // For most things it's probably fine, but if you need precision over a long time span, consider
        // using some sort of timer or DateTime.Now as a better alternative
        timeout -= 100;
    }
}
Another small note on parallelism:
This is a synchronous method, which will block its thread both while waiting and while working. This is the simplest approach, but if the file remains locked for a long time your program may become unresponsive. Parallelism is too big a topic to go into in depth here (and the number of ways you could set up asynchronous read/write is kind of preposterous), but here is one way it could be parallelized.
public class FileEx
{
    public static async Task CopyWaitAsync(string src, string dst, int timeout, Action doWhenDone = null)
    {
        while (timeout > 0)
        {
            try
            {
                File.Copy(src, dst);
                if (doWhenDone != null) doWhenDone();
                break;
            }
            catch (IOException) { }

            await Task.Delay(100);
            timeout -= 100;
        }
    }

    public static async Task<string> ReadAllTextWaitAsync(string filePath, int timeout)
    {
        while (timeout > 0)
        {
            try
            {
                return File.ReadAllText(filePath);
            }
            catch (IOException) { }

            await Task.Delay(100);
            timeout -= 100;
        }
        return "";
    }

    public static async Task WriteAllTextWaitAsync(string filePath, string contents, int timeout)
    {
        while (timeout > 0)
        {
            try
            {
                File.WriteAllText(filePath, contents);
                return;
            }
            catch (IOException) { }

            await Task.Delay(100);
            timeout -= 100;
        }
    }
}
And here is how it could be used:
public static void Main()
{
    test_FileEx();
    Console.WriteLine("Me First!");
}

public static async void test_FileEx()
{
    await Task.Delay(1);

    // you can do this, but it gives a compiler warning because it can potentially return immediately without finishing the copy
    // As a side note, if the file is not locked this will not return until the copy operation completes. Async functions run synchronously
    // until the first 'await'. See the documentation for async: https://msdn.microsoft.com/en-us/library/hh156513.aspx
    CopyWaitAsync("file1.txt", "file1.bat", 1000);

    // this is the normal way of using this kind of async function. Execution of the following lines will always occur AFTER the copy finishes
    await CopyWaitAsync("file1.txt", "file1.readme", 1000);
    Console.WriteLine("file1.txt copied to file1.readme");

    // The following line doesn't cause a compiler error, but it doesn't make any sense either.
    ReadAllTextWaitAsync("file1.readme", 1000);

    // To get the return value of the function, you have to use this function with the await keyword
    string text = await ReadAllTextWaitAsync("file1.readme", 1000);
    Console.WriteLine("file1.readme says: " + text);
}
//Output:
//Me First!
//file1.txt copied to file1.readme
//file1.readme says: Text to be duplicated!
You can use the following code to check if the file can be opened with exclusive access (that is, it is not opened by another application). If the file isn't closed, you could wait a few moments and check again until the file is closed and you can safely copy it.
You should still check if File.Copy fails, because another application may open the file between the moment you check the file and the moment you copy it.
public static bool IsFileClosed(string filename)
{
    try
    {
        using (var inputStream = File.Open(filename, FileMode.Open, FileAccess.Read, FileShare.None))
        {
            return true;
        }
    }
    catch (IOException)
    {
        return false;
    }
}
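A hypothetical usage sketch, polling with IsFileClosed and still guarding the copy itself, because of the race described above (the paths are illustrative):
string sourcePath = @"C:\watched\incoming.dat";
string destinationPath = @"C:\processed\incoming.dat";

// Poll until the file can be opened exclusively.
while (!IsFileClosed(sourcePath))
    Thread.Sleep(500);

// Another process may still grab the file between the check and the copy,
// so the copy itself needs a guard too.
try
{
    File.Copy(sourcePath, destinationPath);
}
catch (IOException)
{
    // Locked again; wait and retry, or log and give up.
}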
I would like to add an answer here, because this worked for me. I used time delays, while loops, everything I could think of.
I had the Windows Explorer window of the output folder open. I closed it, and everything worked like a charm.
I hope this helps someone.

How to minimize Logging impact

There's already a question about this issue, but it's not telling me what I need to know:
Let's assume I have a web application, and there's a lot of logging on every roundtrip. I don't want to open a debate about why there's so much logging, or how I can do fewer logging operations. I want to know what possibilities I have to make this logging performant and clean.
So far, I've implemented declarative (attribute based) and imperative logging, which seems to be a cool and clean way of doing it... now, what can I do about performance, assuming those logs can take more time than expected? Is it OK to open a thread and leave that work to it?
Things I would consider:
Use an efficient file format to minimise the quantity of data to be written (e.g. XML and text formats are easy to read but usually terribly inefficient - the same information can be stored in a binary format in a much smaller space). But don't spend lots of CPU time trying to pack data "optimally". Just go for a simple format that is compact but fast to write.
Test use of compression on the logs. This may not be the case with a fast SSD but in most I/O situations the overhead of compressing data is less than the I/O overhead, so compression gives a net gain (although it is a compromise - raising CPU usage to lower I/O usage).
Only log useful information. No matter how important you think everything is, it's likely you can find something to cut out.
Eliminate repeated data. e.g. Are you logging a client's IP address or domain name repeatedly? Can these be reported once for a session and then not repeated? Or can you store them in a map file and use a compact index value whenever you need to reference them? etc
Test whether buffering the logged data in RAM helps improve performance (e.g. writing a thousand 20 byte log records will mean 1,000 function calls and could cause a lot of disk seeking and other write overheads, while writing a single 20,000 byte block in one burst means only one function call and could give significant performance increase and maximise the burst rate you get out to disk). Often writing blocks in sizes like (4k, 16k, 32, 64k) of data works well as it tends to fit the disk and I/O architecture (but check your specific architecture for clues about what sizes might improve efficiency). The down side of a RAM buffer is that if there is a power outage you will lose more data. So you may have to balance performance against robustness.
(Especially if you are buffering...) Dump the information to an in-memory data structure and pass it to another thread to stream it out to disk. This will help stop your primary thread being held up by log I/O. Take care with threads though - for example, you may have to consider how you will deal with times when you are creating data faster than it can be logged for short bursts - do you need to implement a queue, etc?
Are you logging multiple streams? Can these be multiplexed into a single log to possibly reduce disk seeking and the number of open files?
Is there a hardware solution that will give a large bang for your buck? e.g. Have you used SSD or RAID disks? Will dumping the data to a different server help or hinder? It may not always make much sense to spend $10,000 of developer time making something perform better if you can spend $500 to simply upgrade the disk.
I use the code below to log. It is a singleton that accepts log messages and puts every message into a ConcurrentQueue. Every two seconds it writes everything that has come in to disk. Your app is now only delayed by the time it takes to put each message in the queue. It's my own code; feel free to use it.
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Windows.Forms;

namespace FastLibrary
{
    public enum Severity : byte
    {
        Info = 0,
        Error = 1,
        Debug = 2
    }

    public class Log
    {
        private struct LogMsg
        {
            public DateTime ReportedOn;
            public string Message;
            public Severity Seriousness;
        }

        // Nice and Threadsafe Singleton Instance
        private static Log _instance;

        public static Log File
        {
            get { return _instance; }
        }

        static Log()
        {
            _instance = new Log();
            _instance.Message("Started");
            _instance.Start("");
        }

        ~Log()
        {
            Exit();
        }

        public static void Exit()
        {
            if (_instance != null)
            {
                _instance.Message("Stopped");
                _instance.Stop();
                _instance = null;
            }
        }

        private ConcurrentQueue<LogMsg> _queue = new ConcurrentQueue<LogMsg>();
        private Thread _thread;
        private string _logFileName;
        private volatile bool _isRunning;

        public void Message(string msg)
        {
            _queue.Enqueue(new LogMsg { ReportedOn = DateTime.Now, Message = msg, Seriousness = Severity.Info });
        }

        public void Message(DateTime time, string msg)
        {
            _queue.Enqueue(new LogMsg { ReportedOn = time, Message = msg, Seriousness = Severity.Info });
        }

        public void Message(Severity seriousness, string msg)
        {
            _queue.Enqueue(new LogMsg { ReportedOn = DateTime.Now, Message = msg, Seriousness = seriousness });
        }

        public void Message(DateTime time, Severity seriousness, string msg)
        {
            _queue.Enqueue(new LogMsg { ReportedOn = time, Message = msg, Seriousness = seriousness });
        }

        private void Start(string fileName = "", bool oneLogPerProcess = false)
        {
            _isRunning = true;
            // Unique FileName with date in it. And ProcessId so the same process running twice will log to different files
            string lp = oneLogPerProcess ? "_" + System.Diagnostics.Process.GetCurrentProcess().Id : "";
            _logFileName = fileName == ""
                ? DateTime.Now.Year.ToString("0000") + DateTime.Now.Month.ToString("00") +
                  DateTime.Now.Day.ToString("00") + lp + "_" +
                  System.IO.Path.GetFileNameWithoutExtension(Application.ExecutablePath) + ".log"
                : fileName;
            _thread = new Thread(LogProcessor);
            _thread.IsBackground = true;
            _thread.Start();
        }

        public void Flush()
        {
            EmptyQueue();
        }

        private void EmptyQueue()
        {
            while (_queue.Any())
            {
                var strList = new List<string>();
                try
                {
                    // Block concurrent writing to file due to flush commands from other context
                    lock (_queue)
                    {
                        LogMsg l;
                        while (_queue.TryDequeue(out l)) strList.Add(l.ReportedOn.ToLongTimeString() + "|" + l.Seriousness + "|" + l.Message);
                        if (strList.Count > 0)
                        {
                            System.IO.File.AppendAllLines(_logFileName, strList);
                            strList.Clear();
                        }
                    }
                }
                catch
                {
                    // ignore errors on errorlogging ;-)
                }
            }
        }

        public void LogProcessor()
        {
            while (_isRunning)
            {
                EmptyQueue();

                // Sleep while running so we write in efficient blocks
                if (_isRunning) Thread.Sleep(2000);
                else break;
            }
        }

        private void Stop()
        {
            // This is never called in the singleton,
            // but we made it a background thread so all will be killed anyway
            _isRunning = false;
            if (_thread != null)
            {
                _thread.Join(5000);
                _thread.Abort();
                _thread = null;
            }
        }
    }
}
Check if the logger has debug enabled before calling logger.Debug; this means your code does not have to evaluate the message string when debug is turned off.
if (_logger.IsDebugEnabled) _logger.Debug($"slow old string {this.foo} {this.bar}");

FileSystemWatcher - is File ready to use

When a file is being copied to the file watcher folder, how can I identify whether the file is completely copied and ready to use? I ask because I am getting multiple events during the file copy. (The file is copied via another program using File.Copy.)
When I ran into this problem, the best solution I came up with was to continually try to get an exclusive lock on the file; while the file is being written, the locking attempt will fail, essentially the method in this answer. Once the file isn't being written to any more, the lock will succeed.
Unfortunately, the only way to do that is to wrap a try/catch around opening the file, which makes me cringe - having to use try/catch is always painful. There just doesn't seem to be any way around that, though, so it's what I ended up using.
Modifying the code in that answer does the trick, so I ended up using something like this:
private void WaitForFile(FileInfo file)
{
    FileStream stream = null;
    bool FileReady = false;
    while (!FileReady)
    {
        try
        {
            using (stream = file.Open(FileMode.Open, FileAccess.ReadWrite, FileShare.None))
            {
                FileReady = true;
            }
        }
        catch (IOException)
        {
            // File isn't ready yet, so we need to keep on waiting until it is.
        }
        // We'll want to wait a bit between polls, if the file isn't ready.
        if (!FileReady) Thread.Sleep(1000);
    }
}
Here is a method that will retry file access up to X number of times, with a Sleep between tries. If it never gets access, the application moves on:
private static bool GetIdleFile(string path)
{
    var fileIdle = false;
    const int MaximumAttemptsAllowed = 30;
    var attemptsMade = 0;

    while (!fileIdle && attemptsMade <= MaximumAttemptsAllowed)
    {
        try
        {
            using (File.Open(path, FileMode.Open, FileAccess.ReadWrite))
            {
                fileIdle = true;
            }
        }
        catch
        {
            attemptsMade++;
            Thread.Sleep(100);
        }
    }

    return fileIdle;
}
It can be used like this:
private void WatcherOnCreated(object sender, FileSystemEventArgs e)
{
    if (GetIdleFile(e.FullPath))
    {
        // Do something like...
        foreach (var line in File.ReadAllLines(e.FullPath))
        {
            // Do more...
        }
    }
}
I had this problem when writing a file. I got events before the file was fully written and closed.
The solution is to use a temporary filename and rename the file once finished. Then watch for the file rename event instead of file creation or change event.
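A minimal sketch of that approach, assuming the writer and the watcher agree on a ".tmp" convention (the paths and extensions here are illustrative):
// Writer side: write under a temporary name, then rename when finished.
// On the same volume, File.Move is effectively a rename, so the ".csv" name
// only ever appears once the contents are complete.
string content = "report contents";
File.WriteAllText(@"C:\drop\report.tmp", content);
File.Move(@"C:\drop\report.tmp", @"C:\drop\report.csv");

// Watcher side: react to the rename instead of Created/Changed.
var watcher = new FileSystemWatcher(@"C:\drop", "*.csv");
watcher.Renamed += (s, e) => Console.WriteLine("Ready to use: " + e.FullPath);
watcher.EnableRaisingEvents = true;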
Note: this problem is not solvable in the general case. Without prior knowledge of how the file is used, you can't know whether the other program(s) have finished operating on it.
In your particular case you should be able to figure out what operations File.Copy consists of.
Most likely the destination file is locked during the whole operation. In this case you should be able to simply try to open the file and handle the "sharing mode violation" exception.
You can also wait for some time... - a very unreliable option, but if you know the size range of the files you may be able to pick a reasonable delay to let the Copy finish.
You can also "invent" some sort of transaction system - i.e. create another file like "destination_file_name.COPYLOCK", which the program that copies the file would create before copying "destination_file_name" and delete afterwards.
private Stream ReadWhenAvailable(FileInfo finfo, TimeSpan? ts = null) => Task.Run(() =>
{
    ts = ts == null ? new TimeSpan(long.MaxValue) : ts;
    var start = DateTime.Now;
    while (DateTime.Now - start < ts)
    {
        Thread.Sleep(200);
        try
        {
            return new FileStream(finfo.FullName, FileMode.Open);
        }
        catch { }
    }
    return null;
})
.Result;
...of course, you can modify aspects of this to suit your needs.
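A hypothetical usage, bounding the wait at 10 seconds and checking for the null that signals a timeout (the path is illustrative):
var stream = ReadWhenAvailable(new FileInfo(@"C:\drop\incoming.dat"), TimeSpan.FromSeconds(10));
if (stream == null)
{
    Console.WriteLine("File never became available within the timeout.");
}
else
{
    using (stream)
    {
        // ... read from the stream ...
    }
}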
One possible solution (it worked in my case) is to use the Changed event. You can log the name of the file just created in the Created event, then catch the Changed event and verify whether the file was just created. When I manipulated the file in the Changed event it didn't throw the "File is in use" error.
If you are doing some sort of inter-process communication, as I do, you may want to consider this solution:
App A writes the file you are interested in, eg "Data.csv"
When done, app A writes a 2nd file, eg. "Data.confirmed"
In your C# app B make the FileWatcher listen to "*.confirmed" files. When you get this event you can safely read "Data.csv", as it is already completed by app A.
(Edit: inspired by comments) Delete the *.confirmed file with app B when done processing the "Data.csv" file. A sketch of this handshake follows below.
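A sketch of that handshake, using the file names from the list above (the directory is illustrative):
// App A: write the data file completely first, then drop the marker.
string csvContent = "col1,col2\n1,2";
File.WriteAllText(@"C:\exchange\Data.csv", csvContent);
File.WriteAllText(@"C:\exchange\Data.confirmed", "");

// App B: watch only for marker files; the matching data file is complete by then.
var watcher = new FileSystemWatcher(@"C:\exchange", "*.confirmed");
watcher.Created += (s, e) =>
{
    var dataFile = Path.ChangeExtension(e.FullPath, ".csv");
    var lines = File.ReadAllLines(dataFile);
    // ... process lines ...
    File.Delete(e.FullPath); // remove the marker when done, as suggested above
};
watcher.EnableRaisingEvents = true;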
I have solved this issue with two features:
Implement the MemoryCache pattern seen in this question: A robust solution for FileSystemWatcher firing events multiple times
Implement a try\catch loop with a timeout for access
You need to collect average copy times in your environment and set the memory cache timeout to be at least as long as the shortest lock time on a new file. This eliminates duplicates in your processing directive and allows some time for the copy to finish. You will have much better success on first try, which means less time spent in the try\catch loop.
Here is an example of the try\catch loop:
public static IEnumerable<string> GetFileLines(string theFile)
{
    DateTime startTime = DateTime.Now;
    TimeSpan timeOut = TimeSpan.FromSeconds(TimeoutSeconds);
    TimeSpan timePassed;
    do
    {
        try
        {
            return File.ReadLines(theFile);
        }
        catch (FileNotFoundException ex)
        {
            EventLog.WriteEntry(ProgramName, "File not found: " + theFile, EventLogEntryType.Warning, ex.HResult);
            return null;
        }
        catch (PathTooLongException ex)
        {
            EventLog.WriteEntry(ProgramName, "Path too long: " + theFile, EventLogEntryType.Warning, ex.HResult);
            return null;
        }
        catch (DirectoryNotFoundException ex)
        {
            EventLog.WriteEntry(ProgramName, "Directory not found: " + theFile, EventLogEntryType.Warning, ex.HResult);
            return null;
        }
        catch (Exception ex)
        {
            // We swallow all other exceptions here so we can try again
            EventLog.WriteEntry(ProgramName, ex.Message, EventLogEntryType.Warning, ex.HResult);
        }
        Task.Delay(777).Wait();
        timePassed = DateTime.Now.Subtract(startTime);
    }
    while (timePassed < timeOut);

    EventLog.WriteEntry(ProgramName, "Timeout after waiting " + timePassed.ToString() + " seconds to read " + theFile, EventLogEntryType.Warning, 258);
    return null;
}
Where TimeoutSeconds is a setting that you can put wherever you hold your settings. This can be tuned for your environment.
