Writing each JSON message to a unique file with NLog - C#

I want to log each JSON message received from the network into a unique file.
At first I thought of doing File.WriteAllText($"{Guid.NewGuid()}.json", jsonMsg);. But I need to be able to archive and delete old log files, so I would have to watch the folder when the application starts up and do all the things that NLog already knows how to do.
With NLog, I can create and add a new Target for each message from code, but I'm not sure how to remove those targets after the message has been written; otherwise this will create a memory leak.
So, should I just stick with my first idea and implement the archive-and-delete logic myself, or is there a way to do this with NLog?

I recommend using a single FileTarget to do the writing, but avoid using MaxArchiveFiles for archive cleanup:
// Give each message a unique id that the file target can use in the file name
var myguid = Guid.NewGuid();
var logEvent = NLog.LogEventInfo.Create(NLog.LogLevel.Info, null, jsonMsg);
logEvent.Properties["msguid"] = myguid;
logger.Log(logEvent);
Then use ${event-properties} in the fileName option:
<target type="file" name="fileNetworkMessages"
fileName="messages/Archive/Output-${event-properties:myguid}.json"
layout="${message}" keepFileOpen="false" />
When including a Guid in the file name, you should avoid MaxArchiveFiles: it introduces a performance hit for every new file created, and the cleanup will not work anyway (the hex letters confuse the file-wildcard matching).
MaxArchiveFiles has an overhead when rolling to a new file, because the FileTarget scans all existing files to see whether cleanup is necessary. That works fine when rolling only once every hour or day, but with a Guid in the file name the cleanup check is triggered for every new file created, which adds up quickly.
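If you do need a cap on the number of files, a simple option is to run the cleanup yourself, on a timer or at application start. A minimal sketch, assuming the folder layout from the config above; the retention limit is invented:
using System;
using System.IO;
using System.Linq;

// Keep only the newest maxFiles messages; folder and limit are assumptions.
static void CleanupMessageFiles(string folder = "messages/Archive", int maxFiles = 1000)
{
    var stale = new DirectoryInfo(folder)
        .EnumerateFiles("Output-*.json")
        .OrderByDescending(f => f.CreationTimeUtc)
        .Skip(maxFiles); // everything beyond the newest maxFiles files

    foreach (var file in stale)
    {
        try { file.Delete(); }
        catch (IOException) { /* still in use - retry on the next run */ }
    }
}
Because this runs outside NLog, it avoids the per-file scan overhead described above.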

Related

How to remove a file from DICOMDIR using fo-dicom

I have created the DICOMDIR using fo-dicom as follows.
var dicomDir = new DicomDirectory();
var dicomFile = Dicom.DicomFile.Open(dicomFilePath);
dicomDir.AddFile(dicomFile, filePathForDicomDir);
DicomWriteOptions options = DicomWriteOptions.Default;
options.ExplicitLengthSequenceItems = false;
dicomDir.Save(dicomDirPath, options);
I have to remove an already-added file from the existing DICOMDIR, but I couldn't find a method for this in the DicomDirectory class.
How do I successfully remove a file reference from a DICOMDIR using fo-dicom, rather than recreating the entire DICOMDIR again?
Have you tried the Delete method?
dicomDir.File.Delete();
Your request is rather unusual. fo-dicom does not provide a method in DicomDirectory to remove an entry from an existing DICOMDIR.
File-sets in DICOM are meant to be immutable. A file-set is a collection of files that are, for example, burnt on a CD and then indexed by the DICOMDIR file. So when should you create a DICOMDIR? When you have the final set of files to burn on CD; for those you create the DICOMDIR file.
Also, internally the DicomDirectoryRecords are a list of entries with offsets that reference each other, so removing an entry means completely rebuilding the record structure anyway. There is therefore no big advantage to removing an entry over recreating the DICOMDIR.
What is the use case where you need to remove an entry and recreating the DICOMDIR is not possible? If there is a valid use case, then you should create an issue in the fo-dicom project on GitHub.
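Until then, recreating the DICOMDIR without the unwanted entry takes only a few lines, using the same calls as in the question. A sketch; datasetFolder, fileToRemove and the relative-path choice are assumptions for illustration:
// Rebuild the DICOMDIR from scratch, skipping the entry to "remove".
// datasetFolder, fileToRemove and dicomDirPath are assumed variables.
var newDir = new DicomDirectory();
foreach (var path in Directory.EnumerateFiles(datasetFolder, "*.dcm"))
{
    if (string.Equals(path, fileToRemove, StringComparison.OrdinalIgnoreCase))
        continue; // leave this file out of the new index

    newDir.AddFile(Dicom.DicomFile.Open(path), Path.GetFileName(path));
}
var options = DicomWriteOptions.Default;
options.ExplicitLengthSequenceItems = false;
newDir.Save(dicomDirPath, options);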

NLog ReconfigExistingLoggers creating new log?

I'm using NLog and I need to change the name of the default log to include information such as a company name. I used this code a long time ago in a console app and it renamed the file as expected.
I'm now trying to use the same code in a new app, and it's creating a new log file instead of just renaming the current one. For example, I now have two files (2019-10-07.log and 2019-10-07_CompanyName.log). The default log gets a few initial entries, and then the remainder of the logging goes into the new one.
Looking for any suggestions. I've been searching for fixes but everything points me back to the code I'm already using.
NLog v4.6.7
string fileNameOnly = "CompanyName";
FileTarget defaultTarget = FindNLogTargetByName("DefaultTarget");
defaultTarget.FileName = logDirectory + string.Format("{0:yyyy-MM-dd}", DateTime.Now) + "_" + fileNameOnly + ".log";
LogManager.ReconfigExistingLoggers();
NLog doesn't support renaming an existing file. If a new file name is used, all the logs will be appended to the new file.
So to rename the file you need to use System.IO.File.Move(path, pathNew) yourself and then change the NLog target.
Unfortunately it's a bit tricky when doing high-volume logging, as NLog will keep recreating the old log file until the target is changed.
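A rough sketch of that sequence, assuming the target is named "DefaultTarget" as in the question and that the generic FindTargetByName<T> of recent NLog 4.x versions is available:
// Flush pending writes, rename the file on disk, then retarget NLog.
// "DefaultTarget", logDirectory and fileNameOnly are taken from the question.
LogManager.Flush();
FileTarget target = LogManager.Configuration.FindTargetByName<FileTarget>("DefaultTarget");
string datePart = string.Format("{0:yyyy-MM-dd}", DateTime.Now);
string oldPath = logDirectory + datePart + ".log";
string newPath = logDirectory + datePart + "_" + fileNameOnly + ".log";
if (File.Exists(oldPath))
    File.Move(oldPath, newPath);  // carry the early entries over
target.FileName = newPath;
LogManager.ReconfigExistingLoggers();
// Anything logged between Move and ReconfigExistingLoggers can still
// recreate the old file - that is the tricky part mentioned above.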
NLog can load settings (like Company name) from app.config or appsettings.json.
Just update your NLog.config to reference the setting. For example:
<target type="file" name="myfile" fileName="${appsetting:CompanyName}${shortdate}.log" />
See also: https://github.com/NLog/NLog/wiki/AppSetting-Layout-Renderer (.NET Framework)
See also: https://github.com/NLog/NLog/wiki/ConfigSetting-Layout-Renderer (.NET Core)

Log4Net: How to handle concurrent writes to the same file?

We have a big solution composed of several applications. One of the applications runs very regularly (every 10 minutes), and if the computer is busy, two executions may sometimes run in parallel. (The question is not about whether that is a good idea or not.)
The only issue we currently have is that when the two overlapping processes both hold an ILogger for the same file, we get an error from log4net indicating that it cannot access the file (file already used by another process, or something like that).
Here is how we have the log configured:
RollingFileAppender appender = new RollingFileAppender
{
    Name = appenderName,
    File = fileName,
    AppendToFile = true,
    MaxSizeRollBackups = 10,
    MaximumFileSize = "10MB",
    RollingStyle = RollingFileAppender.RollingMode.Size,
    StaticLogFileName = false,
    LockingModel = new FileAppender.MinimalLock(),
    ImmediateFlush = true
};
What would be the best way to handle this issue? We cannot have one file per execution.
EDIT
Here is the error I get:
log4net:ERROR [RollingFileAppender] Unable to acquire lock on file XXXX. The process cannot access the file because it is being used by another process.
When you have multiple processes writing to the same file, you can use the FileAppender.InterProcessLock locking model. This locks and unlocks the file based on a Mutex instead of trying to get by with a minimal lock-hold time.
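Applied to the appender from the question, it is a one-property change:
RollingFileAppender appender = new RollingFileAppender
{
    Name = appenderName,
    File = fileName,
    AppendToFile = true,
    MaxSizeRollBackups = 10,
    MaximumFileSize = "10MB",
    RollingStyle = RollingFileAppender.RollingMode.Size,
    StaticLogFileName = false,
    // Mutex-based lock so several processes can share the same file:
    LockingModel = new FileAppender.InterProcessLock(),
    ImmediateFlush = true
};
Note that the mutex only coordinates processes on the same machine, and size-based rolling from several processes at once can still behave oddly, so keep an eye on the roll-over behaviour.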

Show every file in a directory, but don't show files that are currently being copied

I have a function that checks every file in a directory and writes a list to the console. The problem is, I don't want it to include files that are currently being copied into the directory; I only want it to show the files that are complete. How do I do that? Here is my code:
foreach (string file in Directory.EnumerateFiles(@"C:\folder"))
{
    Console.WriteLine(file);
}
There's really no way to tell "being copied" vs "locked for writing by something". Relevant: How to check for file lock? and Can I simply 'read' a file that is in use?
If you want to simply display a list of files that are not open for writing, you can do that by attempting to open them:
foreach (string path in Directory.EnumerateFiles(@"C:\folder"))
{
    try
    {
        // Opening for read fails only if someone holds an exclusive lock
        using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        {
            Console.WriteLine(path);
        }
    }
    catch (IOException)
    {
        // file is in use
        continue;
    }
}
However, there are lots of caveats:
- Immediately after displaying the file name (at the end of the using block), the file could be opened by something else.
- The process writing the file may have used FileShare.Read, which means the call will succeed despite the file being written to.
I'm not sure what exactly you're up to here, but it sounds like two processes sharing a queue directory: one writing, one reading/processing. The biggest challenge is that writing a file takes time, and so your "reading" process ends up picking it up and trying to read it before the whole file is there, which will fail in some way depending on the sharing mode, how your apps are written, etc.
A common pattern to deal with this situation is to use an atomic file operation like Move (see the sketch after these steps):
1. Do the (slow) write/copy operation into a temporary directory that is on the same file system (very important) as the queue directory.
2. Once complete, do a Move from the temporary directory into the queue directory.
Since Move is atomic, the file will either not be there or be 100% there; there is no opportunity for the "reading" process to ever see a partially written file. Note that a Move across file systems acts like a copy and loses the atomicity guarantee.
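Here is the writer side of that pattern; the directory names and the fileName/payload variables are assumptions for illustration:
// Write the file somewhere the reader never looks, then publish it atomically.
// Both directories must be on the same volume for Move to be atomic.
string tempPath = Path.Combine(@"C:\queue\incoming.tmp", fileName);
string finalPath = Path.Combine(@"C:\queue\incoming", fileName);

File.WriteAllBytes(tempPath, payload);  // the slow part, invisible to the reader
File.Move(tempPath, finalPath);         // atomic publish into the queue dir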
There's no "current files being copied" list stored anywhere in Windows/.NET/whatever. Probably the most you could do is attempt to open each file for append and see if you get an exception. Depending on the size and location of your directory, and on the security setup, that may not be an option.
There isn't a clean way to do this, but this... works...
foreach (var file in new DirectoryInfo(@"C:\Folder").GetFiles())
{
    try
    {
        // We only care whether the open succeeds, so dispose immediately
        file.OpenRead().Dispose();
    }
    catch (IOException)
    {
        continue;
    }
    Console.WriteLine(file.Name);
}

NLog - Cleaning up log files that have dynamic file names

I have an application that will be receiving messages through a queuing system, and I would like to log each message to its own file, with the file name being the message id. I figured out how to accomplish this using event-context within the fileName.
However, the maxArchiveFiles setting does not have any effect, probably because I'm not archiving any files. Using this configuration, is there any way I can leverage NLog to limit the number of files, either by date or by count?
<target name="testfile" xsi:type="File"
layout="${message}"
fileName="c:\SupportLogs\${event-context:item=MessageId}.txt"
maxArchiveFiles="50"
keepFileOpen="false"
encoding="iso-8859-2" />
NLog.Logger oLogger = NLog.LogManager.GetLogger("Test");
NLog.LogEventInfo oEvent = new NLog.LogEventInfo(NLog.LogLevel.Debug, "", "My Message");
oEvent.Properties["MessageId"] = Guid.NewGuid().ToString();
oLogger.Log(oEvent);
Unfortunately this is not possible in NLog at the moment. You have to clean up the log files yourself.
