Tying it all together: Directory Recursive Search + Events + Threading - C#

What I would like to do, and have been working towards, is a standard class I can use to retrieve all files and sub-directories (and their sub-directories and files, and so on) under a given directory.
WalkthroughDir(dir)
    files a  = the files in dir
    folders b = the sub-directories of dir
    for each b[i]: WalkthroughDir(b[i])
A straightforward recursive directory search.
Using this as a basis I wanted to extend it to fire events when:
A file is found;
A directory is found;
The search is completed
private void GetDirectories(string path)
{
    GetFiles(path);
    foreach (string dir in Directory.EnumerateDirectories(path))
    {
        if (DirectoryFound != null)
        {
            IOEventArgs<DirectoryInfo> args = new IOEventArgs<DirectoryInfo>(new DirectoryInfo(dir));
            DirectoryFound(this, args);
        }
        // do something with the directory...
        GetDirectories(dir);
    }
}
private void GetFiles(string path)
{
    foreach (string file in Directory.EnumerateFiles(path))
    {
        if (FileFound != null)
        {
            IOEventArgs<FileInfo> args = new IOEventArgs<FileInfo>(new FileInfo(file));
            FileFound(this, args);
        }
        // do something with the file...
    }
}
Where you find the comments above ("do something[...]") is where I might add the file or directory to some data structure.
The most significant cost in this type of search, though, is processing time, particularly for large directory trees. So naturally I wanted to take this another step forward and implement threading. My knowledge of threading is pretty limited, but so far this is the outline of what I've come up with:
public void Search()
{
    m_searchThread = new Thread(new ThreadStart(SearchThread));
    m_searching = true;
    m_searchThread.Start();
}

private void SearchThread()
{
    GetDirectories(m_path);
    m_searching = false;
}
If I use this implementation and subscribe to the events from a control, it throws errors (as I expected) because my GUI application is trying to access a control from another thread.
Could anyone give feedback on this implementation, as well as on how to accomplish the threading? Thanks.
UPDATE (selkathguy recommendation):
This is the adjusted code following selkathguy's recommendation:
private void GetDirectories(DirectoryInfo path)
{
    GetFiles(path);
    foreach (DirectoryInfo dir in path.GetDirectories())
    {
        if (DirectoryFound != null)
        {
            IOEventArgs<DirectoryInfo> args = new IOEventArgs<DirectoryInfo>(dir);
            DirectoryFound(this, args);
        }
        // do something with the directory...
        GetDirectories(dir);
    }
}

private void GetFiles(DirectoryInfo path)
{
    foreach (FileInfo file in path.GetFiles())
    {
        if (FileFound != null)
        {
            IOEventArgs<FileInfo> args = new IOEventArgs<FileInfo>(file);
            FileFound(this, args);
        }
        // do something with the file...
    }
}
Original code time taken: 47.87s
Altered code time taken: 46.14s

To address the first part of your request about raising your own events from the standard class: you can declare a delegate to which other methods can be hooked as callbacks for the event. See http://msdn.microsoft.com/en-us/library/aa645739(v=vs.71).aspx for a good resource; it's fairly trivial to implement.
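For instance, a minimal sketch of what the declarations might look like, reusing the IOEventArgs<T> wrapper from the question and assuming it derives from EventArgs (the class name and raise helper are illustrative, not the asker's code):

public class DirectoryWalker
{
    // Events raised while walking the tree
    public event EventHandler<IOEventArgs<FileInfo>> FileFound;
    public event EventHandler<IOEventArgs<DirectoryInfo>> DirectoryFound;
    public event EventHandler SearchCompleted;

    // Copy the delegate to a local so the null check is race-free if the
    // last subscriber detaches between the check and the invocation.
    protected void OnFileFound(FileInfo file)
    {
        var handler = FileFound;
        if (handler != null) handler(this, new IOEventArgs<FileInfo>(file));
    }
}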
As for threading, I believe that would be unnecessary at least for your performance concerns. Most of the bottleneck of performance for recursively checking directories is waiting for the node information to load from the disk. Relatively speaking, this is what takes all of your time, as fetching a directory info is a blocking process. Making numerous threads all checking different directories can easily slow down the overall speed of your search, and it tremendously complicates your application with the management of the worker threads and delegation of work shares. With that said, having a thread per disk might be desirable if your search spans multiple disks or resource locations.
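If you do keep the search on a background thread, the cross-thread errors mentioned in the question come from touching controls inside the event handlers. A common fix, sketched here for WinForms (the handler body and the listBoxResults control are hypothetical, not from the question):

private void Walker_FileFound(object sender, IOEventArgs<FileInfo> e)
{
    // The event fires on the search thread; marshal back to the UI thread
    // before touching any control.
    if (InvokeRequired)
    {
        BeginInvoke(new EventHandler<IOEventArgs<FileInfo>>(Walker_FileFound), sender, e);
        return;
    }
    listBoxResults.Items.Add(e.ToString());
}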
I have found that something as simple as recursion using DirectoryInfo.GetDirectories() was one of the fastest solutions, as it takes advantage of the caching that Windows already does. A search application I made using it can search tens of thousands of filenames and directory names per second.

Related

Reading Windows Logs efficiently and fast

What I'm trying to accomplish is a C# application that will read logs from the Windows Event Logs and store them somewhere else. This has to be fast, since some of the devices where it will be installed generate a high amount of logs/s.
I have tried three approaches so far:
Local WMI: it didn't work well; there are too many errors and exceptions caused by the size of the collections that need to be loaded.
EventLogReader: I thought this was the perfect solution, since it allows you to query the event log however you like using XPath expressions. The problem is that getting the content of the message for each log (by calling FormatDescription()) takes way too much time for long collections.
E.g: I can read 12k logs in 0.11s if I just go over them.
If I add a line to store the message for each log, it takes nearly 6 minutes to complete exactly the same operation, which is totally crazy for such a low number of logs.
I don't know if there's any kind of optimization that could be applied to EventLogReader to get the message faster; I couldn't find anything in the MS documentation or on the Internet.
I also found that you can read the log entries by using a class called EventLog. However, this class does not let you apply any kind of filter, so you basically have to load the entire list of logs into memory and then filter it according to your needs.
Here's an example:
EventLog eventLog = EventLog.GetEventLogs().FirstOrDefault(el => el.Log.Equals("Security", StringComparison.OrdinalIgnoreCase));
var newEntries = from entry in eventLog.Entries.OfType<EventLogEntry>()
                 orderby entry.TimeWritten ascending
                 where entry.TimeWritten > takefrom
                 select entry;
Despite being faster in terms of getting the message, the memory usage might be high, and I don't want to cause any issues on the devices where this solution will be deployed.
Can anybody help me with this? I cannot find any workarounds or approaches to achieve something like this.
Thank you!
You can give the EventLogReader class a try. See https://learn.microsoft.com/en-us/previous-versions/bb671200(v=vs.90).
It is better than the EventLog class because accessing the EventLog.Entries collection has the nasty property that its count can change while you are reading from it. What is even worse is that the reading happens on an IO threadpool thread which will let your application crash with an unhandled exception. At least that was the case some years ago.
The EventLogReader also gives you the ability to supply a query string to filter for the events you are interested in. That is the way to go if you write a new application.
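As a small illustration (a sketch, not from the original answer), the query string is just an XPath filter passed to EventLogQuery; this one reads only Error-level events (System/Level=2):

var query = new EventLogQuery("Application", PathType.LogName, "*[System/Level=2]");
using (var reader = new EventLogReader(query))
{
    // ReadEvent returns null when no more events match the query.
    for (EventRecord ev = reader.ReadEvent(); ev != null; ev = reader.ReadEvent())
    {
        Console.WriteLine(ev.FormatDescription());
    }
}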
Here is an application which shows how you can parallelize reading:
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Diagnostics;
using System.Diagnostics.Eventing.Reader;
using System.Linq;
using System.Threading.Tasks;

namespace EventLogReading
{
    class Program
    {
        static volatile bool myHasStoppedReading = false;

        static void ParseEventsParallel()
        {
            var sw = Stopwatch.StartNew();
            var query = new EventLogQuery("Application", PathType.LogName, "*");
            const int BatchSize = 100;
            ConcurrentQueue<EventRecord> events = new ConcurrentQueue<EventRecord>();

            // The reader task enqueues every BatchSize-th event; its bookmark
            // marks where a conversion thread should re-read its own batch.
            var readerTask = Task.Factory.StartNew(() =>
            {
                using (EventLogReader reader = new EventLogReader(query))
                {
                    EventRecord ev;
                    int count = 0;
                    while ((ev = reader.ReadEvent()) != null)
                    {
                        if (count % BatchSize == 0)
                        {
                            events.Enqueue(ev);
                        }
                        count++;
                    }
                }
                myHasStoppedReading = true;
            });

            ConcurrentQueue<KeyValuePair<string, EventRecord>> eventsWithStrings = new ConcurrentQueue<KeyValuePair<string, EventRecord>>();
            Action conversion = () =>
            {
                EventRecord ev = null;
                // Each conversion thread opens its own reader so string
                // formatting is not serialized on a shared lock.
                using (var reader = new EventLogReader(query))
                {
                    while (!myHasStoppedReading || events.TryDequeue(out ev))
                    {
                        if (ev != null)
                        {
                            reader.Seek(ev.Bookmark);
                            for (int i = 0; i < BatchSize; i++)
                            {
                                ev = reader.ReadEvent();
                                if (ev == null)
                                {
                                    break;
                                }
                                eventsWithStrings.Enqueue(new KeyValuePair<string, EventRecord>(ev.FormatDescription(), ev));
                            }
                        }
                    }
                }
            };
            Parallel.Invoke(Enumerable.Repeat(conversion, 8).ToArray());
            sw.Stop();
            Console.WriteLine($"Got {eventsWithStrings.Count} events with strings in {sw.Elapsed.TotalMilliseconds:N3}ms");
        }

        static void ParseEvents()
        {
            var sw = Stopwatch.StartNew();
            List<KeyValuePair<string, EventRecord>> parsedEvents = new List<KeyValuePair<string, EventRecord>>();
            using (EventLogReader reader = new EventLogReader(new EventLogQuery("Application", PathType.LogName, "*")))
            {
                EventRecord ev;
                while ((ev = reader.ReadEvent()) != null)
                {
                    parsedEvents.Add(new KeyValuePair<string, EventRecord>(ev.FormatDescription(), ev));
                }
            }
            sw.Stop();
            Console.WriteLine($"Got {parsedEvents.Count} events with strings in {sw.Elapsed.TotalMilliseconds:N3}ms");
        }

        static void Main(string[] args)
        {
            ParseEvents();
            ParseEventsParallel();
        }
    }
}
Got 20322 events with strings in 19,320.047ms
Got 20323 events with strings in 5,327.064ms
This gives a decent speedup of a factor of 4. I needed to use some tricks to get faster, because for some strange reason the class ProviderMetadataCachedInformation is not thread-safe and internally uses a lock(this) around the Format method, which defeats parallel reading.
The key trick is to open the event log again in the conversion threads and then read a batch of events of the query there via the event bookmark API. That way you can format the strings independently.
Update1
I have landed a change in .NET 5 which increases performance by a factor of three up to 20. See https://github.com/dotnet/runtime/issues/34568.
You can also copy the EventLogReader class from .NET Core and use this one instead which will give you the same speedup.
The full saga is described in my blog post: https://aloiskraus.wordpress.com/2020/07/20/ms-performance-hud-analyze-eventlog-reading-performance-in-realtime/
We discussed reading the existing logs a bit in the comments; you can access the Security-tagged logs like this:
var eventLog = new EventLog("Security");
for (int i = 0; i < eventLog.Entries.Count; i++)
{
    Console.WriteLine($"{eventLog.Entries[i].Message}");
}
This might not be the cleanest (performance-wise) way of doing it, but I doubt any other will be faster, as you yourself have already found out by trying out different techniques.
A small edit due to Alois's post: EventLogReader is not faster out of the box than EventLog, especially when using the for-loop mechanism shown in the code block above. I think EventLog is faster there: it only accesses the entries inside the loop by their index, and the Entries collection is just a reference, whereas EventLogReader performs a query first and loops through that result, which should be slower. As commented on Alois's post: if you don't need the query option, just use the EventLog variant. If you do need querying, use EventLogReader, as it can query at a lower level than you could with EventLog (only LINQ queries, which is of course slower than filtering while executing the look-up).
To prevent you from having this hassle again in the future, and because you said you are running a service, I'd use the EntryWritten event of the EventLog class:
var eventLog = new EventLog("Security")
{
    EnableRaisingEvents = true
};
eventLog.EntryWritten += EventLog_EntryWritten;
// .. read existing logs or do other work ..

private static void EventLog_EntryWritten(object sender, EntryWrittenEventArgs e)
{
    Console.WriteLine($"received new entry: {e.Entry.Message}");
}
Note that you must set EnableRaisingEvents to true in order for the event to fire whenever a new entry is logged. It's also good practice (performance-wise, too) to hand the work off to, for example, a Task, so that the system won't lock up while queuing up the calls to your event handler.
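A sketch of that hand-off (Store is a hypothetical persistence method, not part of the original answer):

private static void EventLog_EntryWritten(object sender, EntryWrittenEventArgs e)
{
    // Capture what you need from the entry before leaving the handler,
    // then let a pool thread do the slow work.
    var message = e.Entry.Message;
    Task.Run(() => Store(message));
}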
This approach works fine if you want to retrieve all newly created events. If you want to retrieve newly created events but apply a query (filter) to them, check out the EventLogWatcher class. In your case, where there are no constraints, I'd just use the EntryWritten event, because you don't need filters, and for plain old simplicity.
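For completeness, a minimal sketch of the filtered alternative (assuming the same Security log; the filter string is an assumption):

// EventLogWatcher raises EventRecordWritten for each new event matching the query.
var query = new EventLogQuery("Security", PathType.LogName, "*");
var watcher = new EventLogWatcher(query);
watcher.EventRecordWritten += (s, e) =>
{
    if (e.EventRecord != null)
        Console.WriteLine(e.EventRecord.FormatDescription());
};
watcher.Enabled = true;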

Handle the download of several tens of thousands of files in C#

I'm making a small piece of software that downloads several tens of thousands of files.
It's not efficient at all for now, because I download the files one by one, which is very slow; also, a lot of the files are less than 100 KB.
Do you have any ideas to improve the download speed?
/*******************************
 * Worker work
 *******************************/
private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
{
    listCount = _downloadList.Count;
    // no GUI method !
    while (TotalDownloadFile < _downloadList.Count)
    {
        // handle closing the form during download
        if (_worker.CancellationPending)
        {
            _mainView = null;
            _wc.CancelAsync();
            e.Cancel = true;
        }
        else if (!DownloadInProgress && TotalDownloadFile < listCount)
        {
            _lv = new launcherVersion(_downloadList[TotalDownloadFile]);
            var fileToDownloadPath = Info.getDownloadUrl() + _lv.Path;
            var saveFileToPath = Path.GetFullPath("./") + _lv.Path;
            if (Tools.IsFileExist(saveFileToPath))
                File.Delete(saveFileToPath); // remove the file if it exists
            else
                // create the directory the file will be written to
                // (the API does nothing for an existing directory)
                Directory.CreateDirectory(Path.GetDirectoryName(saveFileToPath));
            StartDownload(fileToDownloadPath, saveFileToPath);
            UpdateRemaingFile();
            _currentFile = TotalDownloadFile;
        }
    }
}
Start Download Function
/*******************************
 * start the download of files
 *******************************/
public void StartDownload(string fileToDownloadLink, string pathToSaveFile)
{
    try
    {
        using (_wc = new WebClient())
        {
            _wc.DownloadProgressChanged += client_DownloadProgressChanged;
            _wc.DownloadFileCompleted += client_DownloadFileCompleted;
            _wc.DownloadFileAsync(new Uri(fileToDownloadLink), pathToSaveFile);
            DownloadInProgress = true;
        }
    }
    catch (WebException e)
    {
        MessageBox.Show(fileToDownloadLink);
        MessageBox.Show(e.ToString());
        _worker.CancelAsync();
        Application.Exit();
    }
}
Expanding upon my comment: you could potentially use multi-threading and concurrency to download entire batches at once. You'd have to put some thought into ensuring each thread completes successfully and that files don't get downloaded twice, and you would have to secure your centralized lists using something like lock.
I would personally implement 3 separate lists: ReadyToDownload, DownloadInProgress, and DownloadComplete.
ReadyToDownload would contain all objects that need to be downloaded. DownloadInProgress would contain both the item being downloaded and the Task handling the download. DownloadComplete would hold all objects that were downloaded and reference the Task that performed the download.
Each Task would hypothetically work better as an instance of a custom object. That object would take a reference to each of the lists, and it would handle updating the lists once its work either completes or fails. In the event of a failure, you could either add a fourth list to house the failed items, or reinsert them into the ReadyToDownload list.
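A minimal sketch of the batching idea (this swaps the question's WebClient for HttpClient; the method name and the concurrency cap of 8 are assumptions):

using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

static async Task DownloadAllAsync(IEnumerable<KeyValuePair<string, string>> urlToPath)
{
    using (var client = new HttpClient())
    using (var gate = new SemaphoreSlim(8)) // at most 8 downloads in flight
    {
        var tasks = urlToPath.Select(async pair =>
        {
            await gate.WaitAsync();
            try
            {
                var data = await client.GetByteArrayAsync(pair.Key);
                Directory.CreateDirectory(Path.GetDirectoryName(pair.Value));
                File.WriteAllBytes(pair.Value, data);
            }
            finally { gate.Release(); }
        });
        await Task.WhenAll(tasks); // completes when every download has finished
    }
}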

UWP C# async method waiting - data not completely loaded exception

I will try to describe my problem in as simple words as possible.
In my UWP app, I am loading the data asynchronously in my MainPage.xaml.cs:
public MainPage()
{
    this.InitializeComponent();
    LoadVideoLibrary();
}

private async void LoadVideoLibrary()
{
    FoldersData = new List<FolderData>();
    var folders = (await Windows.Storage.StorageLibrary.GetLibraryAsync
        (Windows.Storage.KnownLibraryId.Videos)).Folders;
    foreach (var folder in folders)
    {
        var files = (await folder.GetFilesAsync(Windows.Storage.Search.CommonFileQuery.OrderByDate)).ToList();
        FoldersData.Add(new FolderData { files = files, foldername = folder.DisplayName, folderid = folder.FolderRelativeId });
    }
}
So this is the code where I am loading up a List of FolderData objects.
Then, in my other page, Library.xaml.cs, I am using that data to load up my GridView via data binding.
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    try
    {
        LoadLibraryMenuGrid();
    }
    catch { }
}

private async void LoadLibraryMenuGrid()
{
    MenuGridItems = new ObservableCollection<MenuItemModel>();
    var data = MainPage.FoldersData;
    foreach (var folder in data)
    {
        var image = new BitmapImage();
        if (folder.files.Count == 0)
        {
            image.UriSource = new Uri("ms-appx:///Assets/StoreLogo.png");
        }
        else
        {
            for (int i = 0; i < folder.files.Count; i++)
            {
                var thumb = (await folder.files[i].GetThumbnailAsync(Windows.Storage.FileProperties.ThumbnailMode.VideosView));
                if (thumb != null) { await image.SetSourceAsync(thumb); break; }
            }
        }
        MenuGridItems.Add(new MenuItemModel
        {
            numberofvideos = folder.files.Count.ToString(),
            folder = folder.foldername,
            folderid = folder.folderid,
            image = image
        });
    }
    GridHeader = "Library";
}
The problem I am facing is that when I launch my application, wait a few seconds, and then navigate to my Library page, all the data loads up properly.
But when I try to navigate to the Library page immediately after launching the app, it throws an exception:
"collection was modified so it cannot be iterated"
I used a breakpoint and found that, given a few seconds, the FoldersData list has already loaded properly asynchronously; but when I don't give it a few seconds, the async method is only halfway through loading the data, which causes the exception. How can I handle this async situation? Thanks.
What you need is a way to wait for the data to arrive. How you fit that in with the rest of the application (e.g. MVVM or not) is a different story, and not important right now. Don't overcomplicate things. For example, you only need an ObservableCollection if you expect the data to change while the user is looking at it.
Anyway, you need to wait. So how do you wait for that data to arrive?
Use a static class that can be reached from everywhere. In there put a method to get your data. Make sure it returns a task that you cache for future calls. For example:
internal class Data { /* whatever */ }

internal static class DataLoader
{
    private static Task<Data> loaderTask;

    public static Task<Data> LoadDataAsync(bool refresh = false)
    {
        if (refresh || loaderTask == null)
        {
            loaderTask = LoadDataCoreAsync();
        }
        return loaderTask;
    }

    private static async Task<Data> LoadDataCoreAsync()
    {
        // your actual logic goes here
    }
}
With this, you can start the download as soon as you start the application.
await DataLoader.LoadDataAsync();
When you need the data in that other screen, just call the method again. It will not download the data again (unless refresh is true); it will simply wait for the work you started earlier to finish, if it hasn't finished yet.
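On the library page that could look something like this (a sketch; the page would then build its menu items from the returned data instead of reading MainPage.FoldersData directly):

protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);
    // Waits for the load kicked off at app launch; does not start a second load.
    var data = await DataLoader.LoadDataAsync();
    // ... build MenuGridItems from data ...
}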
I get that you don't have much experience yet. There are multiple issues, and no real solution, with the way you are loading the data.
What you need is a service that can give you an ObservableCollection of FolderData. I think MVVM might be out of bounds at this instance unless you are willing to spend a few hours on it, though MVVM would make things a lot easier here.
The main issue at hand is this:
You are using foreach to iterate the folders and the FolderData list; foreach cannot continue if the underlying collection changes.
Firstly, you need to use a for loop as opposed to foreach (see the sketch below). Secondly, add a state flag which denotes whether loading has finished or not. Finally, use an observable data source. In my early days I used to create static properties in App.xaml.cs and use them to share and observe data.
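A sketch of the for-loop suggestion (an index-based loop tolerates items being appended while it runs, whereas foreach throws):

for (int i = 0; i < MainPage.FoldersData.Count; i++)  // Count is re-read on every pass
{
    var folder = MainPage.FoldersData[i];
    // ... build the menu item for this folder as before ...
}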

Multithreaded access to singleton class

I'm having a hard time wrapping my head around accessing a singleton class with multiple threads.
This article has given me a nice starting point to get my singleton thread safe: http://csharpindepth.com/Articles/General/Singleton.aspx
My singleton class is supposed to treat a group of files as a single unit of data, but process them in a parallel fashion.
I store information of each file in a dictionary and return to the calling thread a unique key (which will be created using a DateTime and a random number) so that each thread can later refer to its own file.
public string AddFileForProcessing(FileForProcessing file)
{
    var id = CreateUniqueFileId();
    var resultFile = CreateResultFileFor(file);
    // These collections are written here and only read elsewhere
    _files.Add(id, file);
    _results.Add(id, resultFile);
    return id;
}
Then later threads call methods passing this id.
public void WriteProcessResultToProperFile(string id, string[] processingResult)
{
    // locate the proper file in the dictionary using id, then write to it...
    File.AppendAllLines(_results[id].FileName, processingResult);
}
Those methods will be accessed inside a class that:
a) Responds to a FileWatcher's Created event and creates threads that call AddFileForProcessing:
public void ProcessIncomingFile(object sender, EventArgs e)
{
    var file = ((FileProcessingEventArg)e).File;
    ThreadPool.QueueUserWorkItem(
        item =>
        {
            ProcessFile(file);
        });
}
b) Inside ProcessFile, I add the file to the dictionary and start processing.
private void ProcessFile(FileForProcessing file)
{
    var key = filesManager.AddFileForProcessing(file);
    var records = filesManager.GetRecordsCollection(key);
    for (var i = 0; i < records.Count; i++)
    {
        // Do my processing here
        filesManager.WriteProcessResultToProperFile(key, processingResult);
    }
}
Now I don't get what happens when two threads call these methods, given that they're both using the same instance.
Each thread will call AddFileForProcessing and WriteProcessResultToProperFile with a different parameter. Does that make them two different calls?
Since each call operates on a file uniquely identified by an id that belongs to a single thread (i.e., no file suffers from multiple accesses), can I leave these methods as they are, or do I still have to "lock" them?
Yes, as long as you only read from the shared dictionary all should be fine. And you can process the files in parallel as long as they are different files, as you correctly mention.
The documentation explains:
A Dictionary<TKey, TValue> can support multiple readers concurrently, as long as the collection is not modified.
So you can't do anything in parallel if anyone can call AddFileForProcessing (without a lock). But with calls only to WriteProcessResultToProperFile, it will be fine. This implies that if you want to call AddFileForProcessing in parallel, you need locks in both methods (in fact, in all parts of the code that touch this dictionary).
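Concretely, that could look like this (a sketch based on the question's methods; the _sync field is an assumption):

private readonly object _sync = new object();

public string AddFileForProcessing(FileForProcessing file)
{
    var id = CreateUniqueFileId();
    var resultFile = CreateResultFileFor(file);
    lock (_sync)
    {
        _files.Add(id, file);
        _results.Add(id, resultFile);
    }
    return id;
}

public void WriteProcessResultToProperFile(string id, string[] processingResult)
{
    string fileName;
    lock (_sync)
    {
        fileName = _results[id].FileName;
    }
    // The write itself can stay outside the lock: each id maps to a distinct
    // file, so the writes do not contend with each other.
    File.AppendAllLines(fileName, processingResult);
}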

How to minimize Logging impact

There's already a question about this issue, but it's not telling me what I need to know:
Let's assume I have a web application, and there's a lot of logging on every roundtrip. I don't want to open a debate about why there's so much logging, or how I can do fewer logging operations. I want to know what options I have to make this logging performant and clean.
So far, I've implemented declarative (attribute-based) and imperative logging, which seems like a cool and clean way of doing it. Now, what can I do about performance, assuming those logging calls may take more time than expected? Is it OK to open a thread and leave that work to it?
Things I would consider:
Use an efficient file format to minimise the quantity of data to be written (e.g. XML and text formats are easy to read but usually terribly inefficient; the same information can be stored in a binary format in a much smaller space; see the sketch after this list). But don't spend lots of CPU time trying to pack data "optimally". Just go for a simple format that is compact but fast to write.
Test use of compression on the logs. This may not be the case with a fast SSD but in most I/O situations the overhead of compressing data is less than the I/O overhead, so compression gives a net gain (although it is a compromise - raising CPU usage to lower I/O usage).
Only log useful information. No matter how important you think everything is, it's likely you can find something to cut out.
Eliminate repeated data. e.g. Are you logging a client's IP address or domain name repeatedly? Can these be reported once for a session and then not repeated? Or can you store them in a map file and use a compact index value whenever you need to reference them? etc
Test whether buffering the logged data in RAM helps improve performance (e.g. writing a thousand 20 byte log records will mean 1,000 function calls and could cause a lot of disk seeking and other write overheads, while writing a single 20,000 byte block in one burst means only one function call and could give significant performance increase and maximise the burst rate you get out to disk). Often writing blocks in sizes like (4k, 16k, 32, 64k) of data works well as it tends to fit the disk and I/O architecture (but check your specific architecture for clues about what sizes might improve efficiency). The down side of a RAM buffer is that if there is a power outage you will lose more data. So you may have to balance performance against robustness.
(Especially if you are buffering...) Dump the information to an in-memory data structure and pass it to another thread to stream it out to disk. This will help stop your primary thread being held up by log I/O. Take care with threads though - for example, you may have to consider how you will deal with times when you are creating data faster than it can be logged for short bursts - do you need to implement a queue, etc?
Are you logging multiple streams? Can these be multiplexed into a single log to possibly reduce disk seeking and the number of open files?
Is there a hardware solution that will give a large bang for your buck? e.g. Have you used SSD or RAID disks? Will dumping the data to a different server help or hinder? It may not always make much sense to spend $10,000 of developer time making something perform better if you can spend $500 to simply upgrade the disk.
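As a tiny illustration of the binary-format point in the first bullet (a sketch, not from the original answer; the record layout is an assumption):

using (var w = new BinaryWriter(System.IO.File.Open("app.log.bin", FileMode.Append)))
{
    w.Write(DateTime.UtcNow.Ticks);  // 8 bytes instead of a ~25-character timestamp
    w.Write((byte)0);                // 1-byte severity code (0 = Info) instead of the word "Info"
    w.Write("user logged in");       // length-prefixed UTF-8 message
}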
I use the code below to log. It is a singleton that accepts log messages and puts each one into a ConcurrentQueue; every two seconds it writes everything that has come in to disk. Your app is now only delayed by the time it takes to enqueue each message. It's my own code; feel free to use it.
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Windows.Forms;

namespace FastLibrary
{
    public enum Severity : byte
    {
        Info = 0,
        Error = 1,
        Debug = 2
    }

    public class Log
    {
        private struct LogMsg
        {
            public DateTime ReportedOn;
            public string Message;
            public Severity Seriousness;
        }

        // Nice and Threadsafe Singleton Instance
        private static Log _instance;

        public static Log File
        {
            get { return _instance; }
        }

        static Log()
        {
            _instance = new Log();
            _instance.Message("Started");
            _instance.Start("");
        }

        ~Log()
        {
            Exit();
        }

        public static void Exit()
        {
            if (_instance != null)
            {
                _instance.Message("Stopped");
                _instance.Stop();
                _instance = null;
            }
        }

        private ConcurrentQueue<LogMsg> _queue = new ConcurrentQueue<LogMsg>();
        private Thread _thread;
        private string _logFileName;
        private volatile bool _isRunning;

        public void Message(string msg)
        {
            _queue.Enqueue(new LogMsg { ReportedOn = DateTime.Now, Message = msg, Seriousness = Severity.Info });
        }

        public void Message(DateTime time, string msg)
        {
            _queue.Enqueue(new LogMsg { ReportedOn = time, Message = msg, Seriousness = Severity.Info });
        }

        public void Message(Severity seriousness, string msg)
        {
            _queue.Enqueue(new LogMsg { ReportedOn = DateTime.Now, Message = msg, Seriousness = seriousness });
        }

        public void Message(DateTime time, Severity seriousness, string msg)
        {
            _queue.Enqueue(new LogMsg { ReportedOn = time, Message = msg, Seriousness = seriousness });
        }

        private void Start(string fileName = "", bool oneLogPerProcess = false)
        {
            _isRunning = true;
            // Unique filename with the date in it, plus the process id so the
            // same process running twice will log to different files
            string lp = oneLogPerProcess ? "_" + System.Diagnostics.Process.GetCurrentProcess().Id : "";
            _logFileName = fileName == ""
                ? DateTime.Now.Year.ToString("0000") + DateTime.Now.Month.ToString("00") +
                  DateTime.Now.Day.ToString("00") + lp + "_" +
                  System.IO.Path.GetFileNameWithoutExtension(Application.ExecutablePath) + ".log"
                : fileName;
            _thread = new Thread(LogProcessor);
            _thread.IsBackground = true;
            _thread.Start();
        }

        public void Flush()
        {
            EmptyQueue();
        }

        private void EmptyQueue()
        {
            while (_queue.Any())
            {
                var strList = new List<string>();
                try
                {
                    // Block concurrent writing to the file due to flush commands
                    // from another context
                    lock (_queue)
                    {
                        LogMsg l;
                        while (_queue.TryDequeue(out l)) strList.Add(l.ReportedOn.ToLongTimeString() + "|" + l.Seriousness + "|" + l.Message);
                        if (strList.Count > 0)
                        {
                            System.IO.File.AppendAllLines(_logFileName, strList);
                            strList.Clear();
                        }
                    }
                }
                catch
                {
                    // ignore errors on error logging ;-)
                }
            }
        }

        public void LogProcessor()
        {
            while (_isRunning)
            {
                EmptyQueue();
                // Sleep while running so we write in efficient blocks
                if (_isRunning) Thread.Sleep(2000);
                else break;
            }
        }

        private void Stop()
        {
            // This is never called in the singleton,
            // but we made it a background thread so all will be killed anyway
            _isRunning = false;
            if (_thread != null)
            {
                _thread.Join(5000);
                _thread.Abort();
                _thread = null;
            }
        }
    }
}
Check if the logger is debug-enabled before calling logger.Debug; this way your code does not have to evaluate the message string when debug is turned off.
if (_logger.IsDebugEnabled) _logger.Debug($"slow old string {this.foo} {this.bar}");
