NLog filter by LogLevel as MappedDiagnosticsContext - c#

Based on the answer here: How Thread-Safe is NLog?, I have created a logger and added two MappedDiagnosticsContext values to NLog:
NLog.MappedDiagnosticsContext.Set("Module", string.Format("{0}.{1}", module.ComputerName, module.ModuleType));
NLog.MappedDiagnosticsContext.Set("ModuleCoreLogLevel", string.Format("LogLevel.{0}", module.CoreLogLevel));
I can successfully use the "Module" Context in the NLog config (programmatically) to generate the file name the logger should log to:
${{when:when=length('${{mdc:Module}}') == 0:inner=UNSPECIFIED}}${{when:when=length('${{mdc:Module}}') > 0:inner=${{mdc:Module}}}}.txt
This logs, for example, all messages with a "Module" context of "Test" to a file named Test.txt.
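For reference, a minimal sketch of wiring such a layout into a programmatic FileTarget (the target name, logs folder and rule range are assumptions); note that the raw layout uses single ${...} braces, the doubled {{ }} above are only needed when the layout text is built inside a C# format or interpolated string:

// Hypothetical programmatic setup; target name, folder and level range are assumptions.
var fileTarget = new NLog.Targets.FileTarget("moduleFile")
{
    FileName = "logs/${when:when=length('${mdc:Module}') == 0:inner=UNSPECIFIED}" +
               "${when:when=length('${mdc:Module}') > 0:inner=${mdc:Module}}.txt"
};

var config = new NLog.Config.LoggingConfiguration();
config.LoggingRules.Add(new NLog.Config.LoggingRule("*", NLog.LogLevel.Trace, NLog.LogLevel.Fatal, fileTarget));
NLog.LogManager.Configuration = config;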
I now want to set the LogLevel for the different modules in the same way, and only log messages at or above that level.
I am trying to do this through filters; that is, I am trying to add a filter to the LoggingRule:
rule92.Filters.Add(new ConditionBasedFilter { Condition = "(level < '${mdc:ModuleCoreLogLevel}')", Action = FilterResult.IgnoreFinal });
This, however, does not seem to filter messages.
If, for example, a message is emitted using Logger.Trace() while "ModuleCoreLogLevel" is set to LogLevel.Debug, I can still see the message in the resulting log file.

I solved this by adding one such filter for each level, for example:
rule92.Filters.Add(new ConditionBasedFilter { Condition = "(equals('${mdc:ModuleCoreLogLevel}', 'LogLevel.Trace') and level < LogLevel.Trace)", Action = FilterResult.IgnoreFinal });
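Spelled out, the per-level approach looks roughly like this (a sketch based on the snippets above; the Trace case from the example is effectively a no-op, since no level is below Trace):

// One filter per level: when the module's configured level is X,
// ignore any event whose level is below X.
foreach (var coreLevel in new[] { LogLevel.Debug, LogLevel.Info,
                                  LogLevel.Warn, LogLevel.Error, LogLevel.Fatal })
{
    rule92.Filters.Add(new ConditionBasedFilter
    {
        Condition = $"(equals('${{mdc:ModuleCoreLogLevel}}', 'LogLevel.{coreLevel}') " +
                    $"and level < LogLevel.{coreLevel})",
        Action = FilterResult.IgnoreFinal
    });
}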

Related

How to get Serilog to respect ILoggingBuilder SetMinimumLevel

I want to set the minimum log level programmatically at runtime on a set of loggers. I am using Microsoft.Extensions.Logging as well as Serilog (with Serilog.Extensions.Logging).
It seems that Serilog does not respect the log level set on the ILoggingBuilder:
var level = LogLevel.Debug;

Log.Logger = new LoggerConfiguration()
    .WriteTo.Console(outputTemplate: "[Serilog {SourceContext} {Level:w4}] {Message:lj}{Exception}{NewLine}")
    .CreateLogger();

var loggerFactory = LoggerFactory.Create(builder =>
    builder.SetMinimumLevel(level)
           .AddSerilog()
           .AddSimpleConsole(options => { options.SingleLine = true; })
);

var logger = loggerFactory.CreateLogger("LoggingTest");
logger.LogInformation("Info Message");
logger.LogDebug("Debug Message");
Produces:
[Serilog LoggingTest info] Info Message
info: LoggingTest[0] Info Message
dbug: LoggingTest[0] Debug Message
I could sync these by using .MinimumLevel.ControlledBy(levelSwitch) on Log.Logger along with a mapping from Microsoft to Serilog log levels, but I would then need to change both wherever one is set, which isn't ideal.
Is there a way to get Serilog to respect the minimum log level set on the builder?
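For what it's worth, a sketch of the LoggingLevelSwitch workaround mentioned above (the ToSerilogLevel mapping helper is hypothetical, and both minimum levels still have to be kept in sync manually):

// Hypothetical: keep one Microsoft LogLevel and derive the Serilog level from it.
var level = LogLevel.Debug;
var levelSwitch = new Serilog.Core.LoggingLevelSwitch(ToSerilogLevel(level));

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.ControlledBy(levelSwitch)
    .WriteTo.Console()
    .CreateLogger();

var loggerFactory = LoggerFactory.Create(builder =>
    builder.SetMinimumLevel(level)
           .AddSerilog());

// Hypothetical mapping from Microsoft.Extensions.Logging levels to Serilog levels.
static Serilog.Events.LogEventLevel ToSerilogLevel(LogLevel level) => level switch
{
    LogLevel.Trace => Serilog.Events.LogEventLevel.Verbose,
    LogLevel.Debug => Serilog.Events.LogEventLevel.Debug,
    LogLevel.Information => Serilog.Events.LogEventLevel.Information,
    LogLevel.Warning => Serilog.Events.LogEventLevel.Warning,
    LogLevel.Error => Serilog.Events.LogEventLevel.Error,
    _ => Serilog.Events.LogEventLevel.Fatal
};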

Why is logging to a log4net.ILog appending to multiple logs?

I'm developing a plugin for a third-party application, and for each 'run' of this plugin I want an exclusive log file.
I've built the following class.
public class LogFileRepository
{
    private readonly Common.Configuration.Settings _configSettings;
    private const string InstanceName = "AutomationPlugin.Logging";
    private readonly ILoggerRepository _repository;

    public LogFileRepository(Common.Configuration.Settings configSettings)
    {
        _configSettings = configSettings;
        var repositoryName = $"{InstanceName}.Repository";
        _repository = LoggerManager.CreateRepository(repositoryName);
    }

    public ILog GetLog(string name)
    {
        var logger = LogManager.Exists(_repository.Name, name);
        if (logger != null)
        {
            return logger;
        }

        var filter = new LevelMatchFilter { LevelToMatch = Level.All };
        filter.ActivateOptions();

        var appender = new RollingFileAppender
        {
            AppendToFile = false,
            DatePattern = "yyyy-MM-dd",
            File = String.Format(_configSettings.Paths.LogFileTemplate, name),
            ImmediateFlush = true,
            Layout = new PatternLayout("%n%date{ABSOLUTE} | %-7p | %m"),
            LockingModel = new FileAppender.MinimalLock(),
            MaxSizeRollBackups = 1,
            Name = $"{InstanceName}.{name}.Appender",
            PreserveLogFileNameExtension = false,
            RollingStyle = RollingFileAppender.RollingMode.Once
        };
        appender.AddFilter(filter);
        appender.ActivateOptions();

        BasicConfigurator.Configure(_repository, appender);

        return LogManager.GetLogger(_repository.Name, name);
    }
}
What I intended is for the GetLog method to return the existing logger for the specified name if the LogManager already has one; if there isn't an existing logger, it should instantiate and return a new one.
This mostly happens as intended: on the first run of the plugin a log file is created and written to; on a second run a new log file is created and written to, but all messages are also written to the first log file; and on a third run all messages are written to the two existing log files as well as the new third one.
Why? Is there something in the RollingFileAppender that I've seemingly misunderstood/misconfigured? I want an exclusive log file for each name parameter.
Assuming you've created _repository using LogManager.CreateRepository(), this actually creates a Hierarchy, and when you configure it with your new appender via BasicConfigurator.Configure(_repository, appender); the appender is added to the Hierarchy's root logger's appender collection.
All loggers then created from the repository are child loggers of the root and are configured to be "additive": they append to all appenders defined directly against them and against any of their parent loggers, all the way up to the root. In your case the loggers themselves have no appenders of their own, so they just pick up the appenders from the root, which ends up containing all of them. As a result, every message gets logged to every file.
What you want to do is to attach the appender to its specific logger, and disable additivity so that it doesn't then log to appenders higher in the hierarchy. There doesn't appear to be a "nice" way to do this, but the following worked in my testing:
...
appender.AddFilter(filter);
appender.ActivateOptions();

// Add the appender directly to the logger and prevent it picking up parent appenders
if (LoggerManager.GetLogger(_repository.Name, name) is Logger loggerImpl)
{
    loggerImpl.Additivity = false;
    loggerImpl.AddAppender(appender);
}

BasicConfigurator.Configure(_repository, appender);

return LogManager.GetLogger(_repository.Name, name);
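With that change in place, a quick usage sketch (the run names are hypothetical): each named log now writes only to its own file.

// Hypothetical usage of the repository above.
var repository = new LogFileRepository(configSettings);

var firstRun = repository.GetLog("Run-A");
var secondRun = repository.GetLog("Run-B");

firstRun.Info("written only to Run-A's file");
secondRun.Info("written only to Run-B's file");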

Specflow BeforeTestRun Logging

[BeforeFeature]
public static void BeforeFeature()
{
    featureTitle = $"{FeatureContext.Current.FeatureInfo.Title}";

    featureRollFileAppender = new RollingFileAppender
    {
        AppendToFile = true,
        StaticLogFileName = true,
        Threshold = Level.All,
        Name = "FeatureAppender",
        File = "test.log",
        Layout = new PatternLayout("%date %m%newline%exception"),
    };
    featureRollFileAppender.ActivateOptions();

    log.Info("test");
}
I am attempting to use log4net to output a simple string; however, once the file has been generated, it does not contain any data.
No errors are thrown and the test does complete successfully.
It turns out that the previously selected RollingFileAppender was still open and I needed to select another RollingFileAppender; this is one of the issues with using multiple log files. Once this was resolved, the Info() method wrote to my desired log file.
I was able to resolve my issue by adding the following code:
BasicConfigurator.Configure(nameRunRollFileAppender);
log = LogManager.GetLogger(typeof(Tracer));
log.Info("Output some data");

How can I stop NLog from double logging

I'm writing a Web API call for NLog, so remote apps can log to my logging table.
In my controller I have (hard-coded for now as a sanity check):
NLogger.LogError("Some Error Message", "An exception", 5, "A computer name");
Then my static LogError method looks like this (I tried LogEventInfo() too):
public static void LogError(string msg, string ex, int appid, string machineName)
{
    LogEventInfo logEvent = new LogEventInfo(LogLevel.Error, "Api Logger", "Another test msg");
    logEvent.Properties["myMsg"] = msg;
    logEvent.Properties["myEx"] = ex;
    logEvent.Properties["myAppId"] = appid;
    logEvent.Properties["myMachineName"] = machineName;
    NLogManager.Instance.Log(logEvent);
}
Lastly, this is my code-first config for that rule (there are two others with different DB targets):
private static void ConfigureApiLog()
{
    var dbApiErrorTarget = new DatabaseTarget
    {
        ConnectionString = ConnectionFactory.GetSqlConnection().ConnectionString,
        CommandText = "usp_LogError",
        CommandType = CommandType.StoredProcedure
    };

    dbApiErrorTarget.Parameters.Add(new DatabaseParameterInfo("#level",
        new global::NLog.Layouts.SimpleLayout("${level}")));
    dbApiErrorTarget.Parameters.Add(new DatabaseParameterInfo("#logger",
        new global::NLog.Layouts.SimpleLayout("${logger}")));
    dbApiErrorTarget.Parameters.Add(new DatabaseParameterInfo("#message",
        new global::NLog.Layouts.SimpleLayout("${event-properties:item=myMsg}")));
    dbApiErrorTarget.Parameters.Add(new DatabaseParameterInfo("#exception",
        new global::NLog.Layouts.SimpleLayout("${event-properties:item=myEx}")));
    dbApiErrorTarget.Parameters.Add(new DatabaseParameterInfo("#AppId",
        new global::NLog.Layouts.SimpleLayout("${event-properties:item=myAppId}")));
    dbApiErrorTarget.Parameters.Add(new DatabaseParameterInfo("#MachineName",
        new global::NLog.Layouts.SimpleLayout("${event-properties:item=myMachineName}")));

    Config.AddTarget("database", dbApiErrorTarget);
    Config.LoggingRules.Add(new LoggingRule("*", LogLevel.Error, LogLevel.Fatal, dbApiErrorTarget));
}
I expect one log row per call to the logger instance, but I'm getting two, and I'm not exactly sure why:
Id Date Level Logger Message Exception AppId MachineName
1 2017-03-03 22:43:20.557 Error Api Logger Another test msg 0 mylocalmachine
2 2017-03-03 22:43:20.603 Error Api Logger Some Error Message An exception 5 A computer name
AppId 0 is my Api, 5 is some remote app, hard coded at this point as POC.
Might be that it's Friday, but I can't seem to figure out what's wrong with the code. Any help would be appreciated!
I had this problem and found the cause, for my specific case: there were double entries everywhere, on every target.
I had a .NET Core web application which called UseNLog() on the IWebHostBuilder in Program.cs. This performs AddNLog() internally.
Then in Startup I also manually called loggerInstance.AddNLog(), which caused the double insertion.
The latter must be removed, since UseNLog() is the better and earlier place to enable NLog.
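In other words, keep the UseNLog() registration in Program.cs and drop the manual AddNLog() call in Startup. Roughly (a sketch, assuming NLog.Web.AspNetCore):

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseNLog();   // registers NLog once; no loggerInstance.AddNLog() in Startup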
Hope it helps!
Rules in NLog can be target-specific, or they can write to all targets. The default rule(s) don't have any targets specified, so they will write to any target you create. You're also adding your own rule, which writes specifically to your target and no others.
Thus, the double logging.
You can remove the default rules to resolve the issue by calling
Config.LoggingRules.Clear();
before you add your own rule.
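Applied to the ConfigureApiLog method above, that looks roughly like this; note that Clear() also removes the rules for the two other database targets mentioned earlier, so those rules would have to be added after the clear:

private static void ConfigureApiLog()
{
    // ... dbApiErrorTarget setup and parameters as above ...
    Config.AddTarget("database", dbApiErrorTarget);

    // Drop the existing/default rules so each event is written only once,
    // then register the rule for this target (re-add any other rules you still need).
    Config.LoggingRules.Clear();
    Config.LoggingRules.Add(new LoggingRule("*", LogLevel.Error, LogLevel.Fatal, dbApiErrorTarget));
}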

Different connection string for output or trigger

Here I have a WebJob function using Service Bus triggers and outputs. I'd like to set different configurations for the output and the input.
public static void OnPush(
    [ServiceBusTrigger("%PushProcessor.InputTopicName%", "%PushProcessor.InputTopicSubscriptionName%", AccessRights.Listen)]
    BrokeredMessage message,
    [ServiceBus("%PushProcessor.OutputTopicName%", AccessRights.Send)]
    out BrokeredMessage output
)
I see in the latest API that one can control the job host with Service Bus extensions.
JobHostConfiguration config = new JobHostConfiguration
{
    StorageConnectionString = ConfigHelpers.GetConfigValue("AzureWebJobsStorage"),
    DashboardConnectionString = ConfigHelpers.GetConfigValue("AzureWebJobsDashboard"),
    NameResolver = new ByAppSettingsNameResolver()
};

config.UseServiceBus(new ServiceBusConfiguration
{
    MessageOptions = new OnMessageOptions
    {
        MaxConcurrentCalls = 2,
        AutoRenewTimeout = TimeSpan.FromMinutes(1),
        AutoComplete = true,
    },
    ConnectionString = ConfigHelpers.GetConfigValue("InputServiceBusConnectionString"),
});
Unfortunately I see no control over the connection string for the output. I'd like a different connection string (different namespace/access rights) to be used for inputs versus outputs.
Perhaps the API could support registering named JobHostConfigurations with a JobHost and referring to that name in the attributes for the trigger/output. Anyway, if there is a way to do this, let me know.
Yes, in the latest beta1 release you'll see that there is now a ServiceBusAccountAttribute that you can apply along with the ServiceBusTrigger/ServiceBus attributes. For example:
public static void Test(
    [ServiceBusTriggerAttribute("test"),
     ServiceBusAccount("testaccount")] BrokeredMessage message)
{
    . . .
}
We've done the same for all the other attribute types (Queue/Blob/Table) via StorageAccountAttribute. These account attributes can be applied at the class/method/parameter level. Please give this new feature a try and let us know how it works for you. Also, see the release notes for more details.
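Applied to the OnPush function from the question, that could look roughly like this (a sketch: the "InputAccount"/"OutputAccount" names are hypothetical, and each must resolve to a connection string configured for the SDK):

public static void OnPush(
    [ServiceBusTrigger("%PushProcessor.InputTopicName%", "%PushProcessor.InputTopicSubscriptionName%", AccessRights.Listen),
     ServiceBusAccount("InputAccount")]   // hypothetical account name for the Listen namespace
    BrokeredMessage message,
    [ServiceBus("%PushProcessor.OutputTopicName%", AccessRights.Send),
     ServiceBusAccount("OutputAccount")]  // hypothetical account name for the Send namespace
    out BrokeredMessage output)
{
    // forward the incoming message (placeholder body)
    output = message.Clone();
}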
