How to minimize logging impact - C#

There's already a question about this issue, but it's not telling me what I need to know:
Let's assume I have a web application, and there's a lot of logging on every roundtrip. I don't want to open a debate about why there's so much logging, or how I can perform fewer logging operations. I want to know what possibilities I have to make this logging performant and clean.
So far, I've implemented declarative (attribute-based) and imperative logging, which seems to be a cool and clean way of doing it. Now, what can I do about performance, assuming those logs may take more time than expected? Is it OK to open a thread and hand that work off to it?

Things I would consider:
Use an efficient file format to minimise the quantity of data to be written (e.g. XML and text formats are easy to read but usually terribly inefficient - the same information can be stored in a binary format in a much smaller space). But don't spend lots of CPU time trying to pack data "optimally". Just go for a simple format that is compact but fast to write.
Test use of compression on the logs. This may not pay off with a fast SSD, but in most I/O situations the CPU cost of compressing data is less than the I/O cost it saves, so compression gives a net gain (although it is a compromise: raising CPU usage to lower I/O usage).
Only log useful information. No matter how important you think everything is, it's likely you can find something to cut out.
Eliminate repeated data. e.g. Are you logging a client's IP address or domain name repeatedly? Can these be reported once for a session and then not repeated? Or can you store them in a map file and use a compact index value whenever you need to reference them? etc
Test whether buffering the logged data in RAM helps improve performance (e.g. writing a thousand 20-byte log records will mean 1,000 function calls and could cause a lot of disk seeking and other write overheads, while writing a single 20,000-byte block in one burst means only one function call and could give a significant performance increase and maximise the burst rate you get out to disk). Often writing blocks in sizes like 4k, 16k, 32k, or 64k works well as it tends to fit the disk and I/O architecture (but check your specific architecture for clues about what sizes might improve efficiency); a sketch combining this with the binary-format and compression points follows this list. The downside of a RAM buffer is that if there is a power outage you will lose more data, so you may have to balance performance against robustness.
(Especially if you are buffering...) Dump the information to an in-memory data structure and pass it to another thread to stream it out to disk. This will help stop your primary thread being held up by log I/O. Take care with threads though - for example, you may have to consider how you will deal with times when you are creating data faster than it can be logged for short bursts - do you need to implement a queue, etc?
Are you logging multiple streams? Can these be multiplexed into a single log to possibly reduce disk seeking and the number of open files?
Is there a hardware solution that will give a large bang for your buck? e.g. Have you used SSD or RAID disks? Will dumping the data to a different server help or hinder? It may not always make much sense to spend $10,000 of developer time making something perform better if you can spend $500 to simply upgrade the disk.
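As a concrete illustration of the binary-format, compression, and buffering points above (a sketch only; the file name and record layout are made up):
using System;
using System.IO;
using System.IO.Compression;

// One compressed binary log per run: records pass through a 64k RAM buffer,
// then a gzip compressor, then the file. BinaryWriter length-prefixes strings.
using (var file = new FileStream("app.log.gz", FileMode.Create, FileAccess.Write))
using (var gzip = new GZipStream(file, CompressionMode.Compress))
using (var buffered = new BufferedStream(gzip, 64 * 1024)) // write in 64k blocks
using (var writer = new BinaryWriter(buffered))
{
    writer.Write(DateTime.UtcNow.Ticks);  // 8 bytes instead of a ~25-char timestamp
    writer.Write((byte)1);                // severity as a single byte
    writer.Write("client 42 connected");  // the message itself
}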

I use the code below to log. It is a singleton that accepts log messages and puts each one into a ConcurrentQueue. Every two seconds it writes everything that has come in to disk. Your app is now only delayed by the time it takes to enqueue each message. It's my own code, feel free to use it.
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Windows.Forms;

namespace FastLibrary
{
    public enum Severity : byte
    {
        Info = 0,
        Error = 1,
        Debug = 2
    }

    public class Log
    {
        private struct LogMsg
        {
            public DateTime ReportedOn;
            public string Message;
            public Severity Seriousness;
        }

        // Nice and threadsafe singleton instance
        private static Log _instance;

        public static Log File
        {
            get { return _instance; }
        }

        static Log()
        {
            _instance = new Log();
            _instance.Message("Started");
            _instance.Start("");
        }

        ~Log()
        {
            Exit();
        }

        public static void Exit()
        {
            if (_instance != null)
            {
                _instance.Message("Stopped");
                _instance.Stop();
                _instance = null;
            }
        }

        private ConcurrentQueue<LogMsg> _queue = new ConcurrentQueue<LogMsg>();
        private Thread _thread;
        private string _logFileName;
        private volatile bool _isRunning;

        public void Message(string msg)
        {
            _queue.Enqueue(new LogMsg { ReportedOn = DateTime.Now, Message = msg, Seriousness = Severity.Info });
        }

        public void Message(DateTime time, string msg)
        {
            _queue.Enqueue(new LogMsg { ReportedOn = time, Message = msg, Seriousness = Severity.Info });
        }

        public void Message(Severity seriousness, string msg)
        {
            _queue.Enqueue(new LogMsg { ReportedOn = DateTime.Now, Message = msg, Seriousness = seriousness });
        }

        public void Message(DateTime time, Severity seriousness, string msg)
        {
            _queue.Enqueue(new LogMsg { ReportedOn = time, Message = msg, Seriousness = seriousness });
        }

        private void Start(string fileName = "", bool oneLogPerProcess = false)
        {
            _isRunning = true;
            // Unique file name with the date in it, and the process id so the same
            // process running twice will log to different files
            string lp = oneLogPerProcess ? "_" + System.Diagnostics.Process.GetCurrentProcess().Id : "";
            _logFileName = fileName == ""
                ? DateTime.Now.Year.ToString("0000") + DateTime.Now.Month.ToString("00") +
                  DateTime.Now.Day.ToString("00") + lp + "_" +
                  System.IO.Path.GetFileNameWithoutExtension(Application.ExecutablePath) + ".log"
                : fileName;
            _thread = new Thread(LogProcessor);
            _thread.IsBackground = true;
            _thread.Start();
        }

        public void Flush()
        {
            EmptyQueue();
        }

        private void EmptyQueue()
        {
            while (_queue.Any())
            {
                var strList = new List<string>();
                try
                {
                    // Block concurrent writing to the file due to flush commands from another context
                    lock (_queue)
                    {
                        LogMsg l;
                        while (_queue.TryDequeue(out l)) strList.Add(l.ReportedOn.ToLongTimeString() + "|" + l.Seriousness + "|" + l.Message);
                        if (strList.Count > 0)
                        {
                            System.IO.File.AppendAllLines(_logFileName, strList);
                            strList.Clear();
                        }
                    }
                }
                catch
                {
                    // Ignore errors on error logging ;-)
                }
            }
        }

        public void LogProcessor()
        {
            while (_isRunning)
            {
                EmptyQueue();
                // Sleep while running so we write in efficient blocks
                if (_isRunning) Thread.Sleep(2000);
                else break;
            }
        }

        private void Stop()
        {
            // This is never called in the singleton,
            // but we made it a background thread so all will be killed anyway
            _isRunning = false;
            if (_thread != null)
            {
                _thread.Join(5000);
                _thread.Abort();
                _thread = null;
            }
        }
    }
}
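Usage, given the singleton above:

// Enqueue messages from anywhere; the background thread flushes them every two seconds.
Log.File.Message("something happened");
Log.File.Message(Severity.Error, "something bad happened");

// On shutdown, flush remaining messages and stop the background thread.
Log.Exit();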

Check whether the logger is debug-enabled before calling logger.Debug; that way your code does not have to evaluate the message string when debug logging is turned off.
if (_logger.IsDebugEnabled) _logger.Debug($"slow old string {this.foo} {this.bar}");

Writing StringBuilder to file asynchronously

This code takes control of a file, writes a stream to it and releases it. It deals with requests from asynchronous operations, which may come in at any time.
The Filepath is set per class instance (so the lock object is per instance), but there is potential for conflict since these class instances may share Filepaths. That sort of conflict, as well as all other kinds from outside the class instance, is dealt with by retries.
Is this code suitable for its purpose? Is there a better way to handle this that relies less (or not at all) on the catch-and-retry mechanic?
Also, how do I avoid catching exceptions that have occurred for other reasons?
public string Filepath { get; set; }
private object locker = new object();

public async Task WriteToFile(StringBuilder text)
{
    int timeOut = 100;
    Stopwatch stopwatch = new Stopwatch();
    stopwatch.Start();
    while (true)
    {
        try
        {
            // Wait for the resource to be free
            lock (locker)
            {
                using (FileStream file = new FileStream(Filepath, FileMode.Append, FileAccess.Write, FileShare.Read))
                using (StreamWriter writer = new StreamWriter(file, Encoding.Unicode))
                {
                    writer.Write(text.ToString());
                }
            }
            break;
        }
        catch
        {
            // File not available: conflict with other class instances or another application
        }
        if (stopwatch.ElapsedMilliseconds > timeOut)
        {
            // Give up.
            break;
        }
        // Wait and retry
        await Task.Delay(5);
    }
    stopwatch.Stop();
}
How you approach this is going to depend a lot on how frequently you're writing. If you're writing a relatively small amount of text fairly infrequently, then just use a static lock and be done with it. That might be your best bet in any case because the disk drive can only satisfy one request at a time. Assuming that all of your output files are on the same drive (perhaps not a fair assumption, but bear with me), there's not going to be much difference between locking at the application level and the lock that's done at the OS level.
So if you declare locker as:
static object locker = new object();
You'll be assured that there are no conflicts with other threads in your program.
If you want this thing to be bulletproof (or at least reasonably so), you can't get away from catching exceptions. Bad things can happen. You must handle exceptions in some way. What you do in the face of error is something else entirely. You'll probably want to retry a few times if the file is locked. If you get a bad path or filename error or disk full or any of a number of other errors, you probably want to kill the program. Again, that's up to you. But you can't avoid exception handling unless you're okay with the program crashing on error.
By the way, you can replace all of this code:
using (FileStream file = new FileStream(Filepath, FileMode.Append, FileAccess.Write, FileShare.Read))
using (StreamWriter writer = new StreamWriter(file, Encoding.Unicode))
{
    writer.Write(text.ToString());
}
With a single call:
File.AppendAllText(Filepath, text.ToString());
Assuming you're using .NET 4.0 or later. See File.AppendAllText.
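Note that File.AppendAllText writes UTF-8 by default, while the original stream used Encoding.Unicode; if that matters, use the overload that takes an encoding:
File.AppendAllText(Filepath, text.ToString(), Encoding.Unicode);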
One other way you could handle this is to have the threads write their messages to a queue, and have a dedicated thread that services that queue. You'd have a BlockingCollection of messages and associated file paths. For example:
class LogMessage
{
    public string Filepath { get; set; }
    public string Text { get; set; }
}

BlockingCollection<LogMessage> _logMessages = new BlockingCollection<LogMessage>();
Your threads write data to that queue:
_logMessages.Add(new LogMessage { Filepath = "foo.log", Text = "this is a test" });
You start a long-running background task that does nothing but service that queue:
foreach (var msg in _logMessages.GetConsumingEnumerable())
{
    // of course you'll want your exception handling in here
    File.AppendAllText(msg.Filepath, msg.Text);
}
Your potential risk here is that threads create messages too fast, causing the queue to grow without bound because the consumer can't keep up. Whether that's a real risk in your application is something only you can say. If you think it might be a risk, you can put a maximum size (number of entries) on the queue so that if the queue size exceeds that value, producers will wait until there is room in the queue before they can add.
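A sketch of wiring that together (the names come from the snippets above; the capacity value is illustrative):

// Bounded queue: producers block in Add() once 10,000 entries are waiting.
BlockingCollection<LogMessage> _logMessages = new BlockingCollection<LogMessage>(10000);

// Dedicated long-running consumer that owns all file writes.
Task _consumer = Task.Factory.StartNew(() =>
{
    foreach (var msg in _logMessages.GetConsumingEnumerable())
    {
        File.AppendAllText(msg.Filepath, msg.Text);
    }
}, TaskCreationOptions.LongRunning);

// At shutdown: signal no more producers; the consumer drains the queue and the foreach ends.
_logMessages.CompleteAdding();
_consumer.Wait();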
You could also use ReaderWriterLock; it is considered a more 'appropriate' way to control thread safety when dealing with read/write operations...
To debug my web apps (when remote debugging fails) I use the following ('debug.txt' ends up in the \bin folder on the server):
public static class LoggingExtensions
{
    static ReaderWriterLock locker = new ReaderWriterLock();

    public static void WriteDebug(string text)
    {
        try
        {
            locker.AcquireWriterLock(int.MaxValue);
            System.IO.File.AppendAllLines(
                Path.Combine(
                    Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase).Replace("file:\\", ""),
                    "debug.txt"),
                new[] { text });
        }
        finally
        {
            locker.ReleaseWriterLock();
        }
    }
}
Hope this saves you some time.
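Usage is then a single static call from any thread:
LoggingExtensions.WriteDebug("reached checkpoint A"); // appends one line to bin\debug.txt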

Throttling file operations

I have a byte array that I want to persist in a file. But I don't want to write to the file each time it is updated, because it can be updated very frequently. Currently I am planning to use an approach similar to the following:
class ThrottleTest
{
    private byte[] _internal_data = new byte[256];
    CancellationTokenSource _cancel_saving = new CancellationTokenSource();

    public void write_to_file()
    {
        Task.Delay(1000).ContinueWith((task) =>
        {
            File.WriteAllBytes("path/to/file.data", _internal_data);
        }, _cancel_saving.Token);
    }

    public void operation_that_update_internal_data()
    {
        // cancel writing previous update
        _cancel_saving.Cancel();
        /*
         * operate on _internal_data
         */
        write_to_file();
    }

    public void another_operation_that_update_internal_data()
    {
        // cancel writing previous update
        _cancel_saving.Cancel();
        /*
         * operate on _internal_data
         */
        write_to_file();
    }
}
I don't think this approach would work because once I cancel the token it stays canceled forever, so nothing would ever be written to the file.
First of all, I was wondering if I am on the right track here, and above code can be made to work. If not, what would be the best approach to achieve this behaviour. Moreover, is there a practical way to generalize it to Dictionary<string,byte[]>, where any byte[] can be modified independently?
I would start write_to_file by first cancelling the previous operation.
I would also pass the cancellation token to the Delay task itself:
CancellationTokenSource _cancel_saving;

public void write_to_file()
{
    _cancel_saving?.Cancel();
    _cancel_saving = new CancellationTokenSource();
    Task.Delay(1000, _cancel_saving.Token).ContinueWith((task) =>
    {
        File.WriteAllBytes("path/to/file.data", _internal_data);
    }, _cancel_saving.Token);
}
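To generalize this to the Dictionary<string, byte[]> case from the question, one sketch (illustrative, not from the original answer; it is not fully race-free under heavy contention, but it shows the shape) keeps one CancellationTokenSource per key in a ConcurrentDictionary:

private readonly ConcurrentDictionary<string, CancellationTokenSource> _pending =
    new ConcurrentDictionary<string, CancellationTokenSource>();

public void write_to_file(string key, byte[] data)
{
    // Cancel any pending write for this key only, then schedule a new one.
    if (_pending.TryRemove(key, out var old)) old.Cancel();
    var cts = new CancellationTokenSource();
    _pending[key] = cts;
    Task.Delay(1000, cts.Token).ContinueWith(task =>
    {
        File.WriteAllBytes("path/to/" + key + ".data", data);
    }, cts.Token);
}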
You should use Microsoft's Reactive Framework (aka Rx) - NuGet System.Reactive - and add using System.Reactive.Linq; (plus using System.Reactive; and using System.Reactive.Subjects; for Unit and Subject<T>). Then you can do this:
public class ThrottleTest
{
    private byte[] _internal_data = new byte[256];
    private Subject<Unit> _write_to_file = new Subject<Unit>();

    public ThrottleTest()
    {
        _write_to_file
            .Throttle(TimeSpan.FromSeconds(1.0))
            .Subscribe(_ => File.WriteAllBytes("path/to/file.data", _internal_data));
    }

    public void write_to_file()
    {
        _write_to_file.OnNext(Unit.Default);
    }

    public void operation_that_update_internal_data()
    {
        /*
         * operate on _internal_data
         */
        write_to_file();
    }

    public void another_operation_that_update_internal_data()
    {
        /*
         * operate on _internal_data
         */
        write_to_file();
    }
}
Your context seems a little odd to me: you are writing all of the bytes at once rather than using a stream. Putting aside the issue with the cancellation token, delaying a write by one second won't reduce the overall load or overall throughput to disk.
This answer has the following assumptions:
You are using an SSD and are concerned about hardware lifetime
This is a low priority activity, where some data loss will be tolerated
This is not a logging activity (otherwise an append to file would work better with a BufferedStream)
This is likely the saving of a serialized C# object tree to disk in case the power goes out
You don't want every change made to the object tree to result in a write to disk.
You don't want to write to disk every second if there has been no change to the object tree.
It should write to disk right away if there hasn't been a write for N seconds
It should wait if there has been a write recently.
Having the WriteAllBytes step as the throttle point is not ideal.
Usage:
rootObject.subObject.value = 9;
rootObject.Save(token);
Support code:
TimeSpan minimumDiskInterval = TimeSpan.FromSeconds(60);
DateTime lastSaveAt = DateTime.MinValue;
bool alreadyQueued = false;

public void Save(CancellationToken token)
{
    if (alreadyQueued) //TODO: Interlocked with long instead for atomic memory barrier
        return;
    alreadyQueued = true; //TODO: Interlocked with long instead for atomic memory barrier
    var thisSaveAt = DateTime.UtcNow;
    var sinceLastSave = thisSaveAt.Subtract(lastSaveAt);
    var difference = minimumDiskInterval.TotalSeconds - sinceLastSave.TotalSeconds;
    if (difference < 0)
    {
        // It has been a while - no need to delay
        SaveNow();
    }
    else
    {
        // It was done recently - wait out the remainder of the interval
        Task.Delay(TimeSpan.FromSeconds(difference)).ContinueWith((task) =>
        {
            SaveNow();
        }, token);
    }
}

object fileAccessSync = new object();

public void SaveNow()
{
    alreadyQueued = false; //TODO: Interlocked with long instead for atomic memory barrier
    lastSaveAt = DateTime.UtcNow; // record the write time so Save() can throttle correctly
    byte[] serializedBytes = Serialise(this);
    lock (fileAccessSync)
    {
        File.WriteAllBytes("path/to/file.data", serializedBytes);
    }
}

C# Best Practice used for application multithreading log building cross-form instance

I'm building a UI which consists of one main Form and possible instances of additional forms and custom classes. What I'm still missing is a consistent way of logging errors, so I created try-catch blocks around all code that could generate errors, mainly the parts that process incoming data. I'm receiving a constant data flow (JSON) from some site, and the framework's built-in threading functionality makes this a multi-threading application. Again, the multi-threading part is built-in functionality; I'm not actively doing it myself, since I'm not that smart yet from a C# point of view. ;)
For the logging part, I've got the code below from here. Even though I'm not so smart yet, I do think I actually understand what is going on there. My concern/question, however, is this: how do I implement a multi-threading logging mechanism that writes errors to ONE log file, cross-form and cross-class?
Here is an example that you can use a reference:
// MyMainForm.cs
namespace MyNameSpace
{
    public partial class MyMainForm : Form
    {
        FooClass MyClass = new FooClass(); // << errors could occur here
        Form f = new MyForm();             // << errors could occur here
        ...                                // << errors could occur here
    }
}

// FooClass.cs
namespace MyNameSpace
{
    public class FooClass
    {
        public string ErrorGeneratingMethod()
        {
            try...catch(Exception e) { /* Write to Log file */ }
        }
    }
}
// Don't really know where to put this...
private static ReaderWriterLockSlim _readWriteLock = new ReaderWriterLockSlim();

public void WriteToFileThreadSafe(string text, string context)
{
    string t = DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm:ss.fff", CultureInfo.InvariantCulture);
    string path = Properties.Settings.Default.FQLogFileLocation;
    // Set status to locked
    _readWriteLock.EnterWriteLock();
    try
    {
        // Append text to the file
        using (StreamWriter sw = File.AppendText(path))
        {
            sw.WriteLine("[" + t + "][" + context + "]" + text);
            sw.Close();
        }
    }
    catch (Exception e)
    {
        MessageBox.Show(e.Message); // Really exceptional (should never happen)
    }
    finally
    {
        // Release lock
        _readWriteLock.ExitWriteLock();
    }
}
So basically, what is important for me to know is: where do I put WriteToFileThreadSafe() together with _readWriteLock?
And how do I safely use this function from multiple threads in multiple forms and classes?
Thanks a lot in advance for letting me learn from you gurus :)
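One minimal arrangement (a sketch only, assuming a single shared log file is acceptable; the class name is illustrative): make the method and its lock members of one static class, so every form and class in the process shares the same lock and file.

public static class AppLogger
{
    private static readonly ReaderWriterLockSlim _readWriteLock = new ReaderWriterLockSlim();

    public static void WriteToFileThreadSafe(string text, string context)
    {
        string t = DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm:ss.fff", CultureInfo.InvariantCulture);
        string path = Properties.Settings.Default.FQLogFileLocation;
        _readWriteLock.EnterWriteLock();
        try
        {
            File.AppendAllText(path, "[" + t + "][" + context + "]" + text + Environment.NewLine);
        }
        finally
        {
            _readWriteLock.ExitWriteLock();
        }
    }
}

// Called the same way from any form, class, or thread:
// AppLogger.WriteToFileThreadSafe("Something failed", nameof(FooClass));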

Multiple publishers sending concurrent messages to a single subscriber in Retlang?

I need to build an application where some number of instances of an object are generating "pulses", concurrently. (Essentially this just means that they are incrementing a counter.) I also need to track the total counters for each object. Also, whenever I perform a read on a counter, it needs to be reset to zero.
So I was talking to a guy at work, and he mentioned Retlang and message-based concurrency, which sounded super interesting. But obviously I am very new to the concept. So I've built a small prototype, and I get the expected results, which is awesome - but I'm not sure if I've potentially made some logical errors and left the software open to bugs, due to my inexperience with Retlang and concurrent programming in general.
First off, I have these classes:
public class Plc {
    private readonly IChannel<Pulse> _channel;
    private readonly IFiber _fiber;
    private readonly int _pulseInterval;
    private readonly int _plcId;

    public Plc(IChannel<Pulse> channel, int plcId, int pulseInterval) {
        _channel = channel;
        _pulseInterval = pulseInterval;
        _fiber = new PoolFiber();
        _plcId = plcId;
    }

    public void Start() {
        _fiber.Start();
        // Not sure if it's safe to pass in a delegate which will run in an infinite loop...
        // AND use a shared channel object...
        _fiber.Enqueue(() => {
            SendPulse();
        });
    }

    private void SendPulse() {
        while (true) {
            // Not sure if it's safe to use the same channel object in different
            // IFibers...
            _channel.Publish(new Pulse() { PlcId = _plcId });
            Thread.Sleep(_pulseInterval);
        }
    }
}

public class Pulse {
    public int PlcId { get; set; }
}
The idea here is that I can instantiate multiple Plcs, pass each one the same IChannel, and then have them execute the SendPulse function concurrently, which would allow each one to publish to the same channel. But as you can see from my comments, I'm a little skeptical that what I'm doing is actually legit. I'm mostly worried about using the same IChannel object to Publish in the context of different IFibers, but I'm also worried about never returning from the delegate that was passed to Enqueue. I'm hoping some one can provide some insight as to how I should be handling this.
Also, here is the "subscriber" class:
public class PulseReceiver {
    private int[] _pulseTotals;
    private readonly IFiber _fiber;
    private readonly IChannel<Pulse> _channel;
    private object _pulseTotalsLock;

    public PulseReceiver(IChannel<Pulse> channel, int numberOfPlcs) {
        _pulseTotals = new int[numberOfPlcs];
        _channel = channel;
        _fiber = new PoolFiber();
        _pulseTotalsLock = new object();
    }

    public void Start() {
        _fiber.Start();
        _channel.Subscribe(_fiber, this.UpdatePulseTotals);
    }

    private void UpdatePulseTotals(Pulse pulse) {
        // This occurs in the execution context of the IFiber.
        // If we were just dealing with the published Pulses from the channel, I think
        // we wouldn't need the lock, since I THINK the published messages would be taken
        // from a queue (i.e. each Plc is publishing concurrently, but Retlang enqueues
        // the messages).
        lock (_pulseTotalsLock) {
            _pulseTotals[pulse.PlcId - 1]++;
        }
    }

    public int GetTotalForPlc(int plcId) {
        // However, this access takes place in the application thread, not in the IFiber,
        // and I think there could potentially be a race condition here. I.e. the array
        // is being updated from the IFiber, but I think I'm reading from it and resetting values
        // concurrently in a different thread.
        lock (_pulseTotalsLock) {
            if (plcId <= _pulseTotals.Length) {
                int currentTotal = _pulseTotals[plcId - 1];
                _pulseTotals[plcId - 1] = 0;
                return currentTotal;
            }
        }
        return -1;
    }
}
So here, I am reusing the same IChannel that was given to the Plc instances, but having a different IFiber subscribe to it. Ideally then I could receive the messages from each Plc, and update a single private field within my class, but in a thread safe way.
From what I understand (and I mentioned in my comments), I think that I would be safe to simply update the _pulseTotals array in the delegate which I gave to the Subscribe function, because I would receive each message from the Plcs serially.
However, I'm not sure how best to handle the bit where I need to read the totals and reset them. As you can see from the code and comments, I ended up wrapping a lock around any access to the _pulseTotals array. But I'm not sure if this is necessary, and I would love to know a) if it is in fact necessary to do this, and why, or b) the correct way to implement something similar.
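For what it's worth, a lock-free alternative for the read-and-reset (a sketch, not Retlang-specific): the Interlocked class can increment a counter and atomically swap it back to zero, which removes the need for the lock entirely.

private void UpdatePulseTotals(Pulse pulse) {
    // Safe even though the fiber and the reader run on different threads.
    Interlocked.Increment(ref _pulseTotals[pulse.PlcId - 1]);
}

public int GetTotalForPlc(int plcId) {
    if (plcId >= 1 && plcId <= _pulseTotals.Length) {
        // Atomically read the current total and reset it to zero.
        return Interlocked.Exchange(ref _pulseTotals[plcId - 1], 0);
    }
    return -1;
}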
And finally for good measure, here's my main function:
static void Main(string[] args) {
    Channel<Pulse> pulseChannel = new Channel<Pulse>();
    PulseReceiver pulseReceiver = new PulseReceiver(pulseChannel, 3);
    pulseReceiver.Start();
    List<Plc> plcs = new List<Plc>() {
        new Plc(pulseChannel, 1, 500),
        new Plc(pulseChannel, 2, 250),
        new Plc(pulseChannel, 3, 1000)
    };
    plcs.ForEach(plc => plc.Start());
    while (true) {
        Thread.Sleep(10000);
        Console.WriteLine(string.Format("Plc 1: {0}\nPlc 2: {1}\nPlc 3: {2}\n",
            pulseReceiver.GetTotalForPlc(1), pulseReceiver.GetTotalForPlc(2), pulseReceiver.GetTotalForPlc(3)));
    }
}
I instantiate one single IChannel, pass it to everything, where internally the Receiver subscribes with an IFiber, and where the Plcs use IFibers to "enqueue" a non-returning method which continually publishes to the channel.
Again, the console output looks exactly like I would expect it to look, i.e. I see 20 "pulses" for Plc 1 after waiting 10 seconds. And the resetting of the counters after a read also seems to work, i.e. Plc 1 has 20 "pulses" after each 10 second increment. But that doesn't reassure me that I haven't overlooked something important.
I'm really excited to learn a bit more about Retlang and concurrent programming techniques, so hopefully someone has the time to sift through my code and offer some suggestions for my specific concerns, or even a different design based on my requirements!

Static Logger in separate thread?

I've made my Logger, which logs a string, a static class with a static method,
so I can call it from my entire project without having to make an instance of it.
Quite nice, but I want to make it run in a separate thread, since accessing the file costs time.
Is that possible somehow, and what's the best way to do it?
It's a bit of a short description, but I hope the idea is clear. If not, please let me know.
Thanks in advance!
By the way, any other improvements to my code are welcome as well; I have the feeling not everything is as efficient as it can be:
internal static class MainLogger
{
    internal static void LogStringToFile(string logText)
    {
        DateTime timestamp = DateTime.Now;
        string str = timestamp.ToString("dd-MM-yy HH:mm:ss ", CultureInfo.InvariantCulture) + "\t" + logText + "\n";
        const string filename = Constants.LOG_FILENAME;
        FileInfo fileInfo = new FileInfo(filename);
        if (fileInfo.Exists)
        {
            if (fileInfo.Length > Constants.LOG_FILESIZE)
            {
                File.Create(filename).Dispose();
            }
        }
        else
        {
            File.Create(filename).Dispose();
        }
        int i = 0;
        while (true)
        {
            try
            {
                using (StreamWriter writer = File.AppendText(filename))
                {
                    writer.WriteLine(str);
                }
                break;
            }
            catch (IOException)
            {
                Thread.Sleep(10);
                i++;
                if (i >= 8)
                {
                    throw new IOException("Log file \"" + Constants.LOG_FILENAME + "\" not accessible after 8 tries");
                }
            }
        }
    }
}
If you're doing this as an exercise (just using a ready made logger isn't an option) you could try a producer / consumer system.
Either make an Init function for your logger, or use the static constructor - inside it, launch a new System.Threading.Thread, which just runs through a while(true) loop.
Create a new Queue<string> and have your logging function enqueue onto it.
Your while(true) loop looks for items on the queue, dequeues them, and logs them.
Make sure you lock your queue before doing anything with it on either thread. A minimal sketch of this design follows.
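A sketch of that producer/consumer logger (names and the log file path are illustrative; it assumes System.Collections.Generic, System.IO and System.Threading):

internal static class ThreadedLogger
{
    private static readonly Queue<string> _queue = new Queue<string>();

    static ThreadedLogger()
    {
        var thread = new Thread(() =>
        {
            while (true)
            {
                string line = null;
                lock (_queue)
                {
                    if (_queue.Count > 0) line = _queue.Dequeue();
                }
                if (line != null) File.AppendAllText("app.log", line + Environment.NewLine);
                else Thread.Sleep(50); // nothing queued; avoid spinning
            }
        });
        thread.IsBackground = true; // don't keep the process alive on exit
        thread.Start();
    }

    internal static void Log(string message)
    {
        lock (_queue) _queue.Enqueue(message);
    }
}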
Sorry, but don't reinvent the wheel:
choose log4net (or any other (enterprise) logging engine) as your logger!
OK, simply put, you need to create a thread-safe static class. Below are some code snippets: a delegate that you call from any thread, which routes to the correct thread and then invokes the WriteToFile function.
When you start the application that you want to log against, pass it the following, where LogFile is the filename and path of your log file.
Log.OnNewLogEntry += Log.WriteToFile (LogFile, Program.AppName);
Then you want to put this inside your static Logging class. The wizard bit is the ThreadSafeAddEntry function; it makes sure you are on the correct thread before writing the line away.
public delegate void AddEntryDelegate(string entry, bool error);

public static Form mainwin;
public static event AddEntryDelegate OnNewLogEntry;

public static void AddEntry(string entry)
{
    ThreadSafeAddEntry(entry, false);
}

private static void ThreadSafeAddEntry(string entry, bool error)
{
    try
    {
        if (mainwin != null && mainwin.InvokeRequired) // we are in a different thread to the main window
            mainwin.Invoke(new AddEntryDelegate(ThreadSafeAddEntry), new object[] { entry, error }); // call self from main thread
        else
            OnNewLogEntry(entry, error);
    }
    catch { }
}

public static AddEntryDelegate WriteToFile(string filename, string appName)
{
    // Do your WriteToFile work here
}
And finally to write a line...
Log.AddEntry ("Hello World!");
What you have in this case is a typical producer consumer scenario - many threads produce log entries and one thread writes them out to a file. The MSDN has an article with sample code for this scenario.
For starters, your logging mechanism should generally avoid throwing exceptions. Frequently logging mechanisms are where errors get written to, so things get ugly when they also start erroring.
I would look into the BackgroundWorker class, as it allows you to fork off threads that can do the logging for you. That way your app isn't slowed down, and any exception raised in the worker is captured in the completion event's Error property rather than crashing the app.
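A minimal sketch of that idea (assuming the MainLogger class from the question; spawning one worker per write is wasteful, but it shows the mechanics):

var worker = new BackgroundWorker();
worker.DoWork += (sender, e) => MainLogger.LogStringToFile((string)e.Argument);
worker.RunWorkerCompleted += (sender, e) =>
{
    // e.Error holds any exception thrown in DoWork; here it is deliberately ignored.
};
worker.RunWorkerAsync("something happened");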
