Using .Net 4.6 I have a static Serilog helper class - I've stripped down to the essentials as follows:
public static class SerilogHelper
{
private static ILogger log;
private static ILogger CreateLogger()
{
if (log == null)
{
string levelString = SSOSettingsFileManager.SSOSettingsFileReader.ReadString(
"BizTalk.Common", "serilog.minimum-level");
SerilogLevel level = (SerilogLevel)Enum.Parse(typeof(SerilogLevel), levelString);
string conString = SSOSettingsFileManager.SSOSettingsFileReader.ReadString(
"BizTalk.Common", "serilog.connection-string");
var levelSwitch = new LoggingLevelSwitch();
levelSwitch.MinimumLevel = (Serilog.Events.LogEventLevel)level;
log = new LoggerConfiguration()
.MinimumLevel.ControlledBy(levelSwitch)
.WriteTo.MSSqlServer(connectionString: conString, tableName: "Serilog", autoCreateSqlTable: true)
.WriteTo.RollingFile("log-{Date}.txt")
.CreateLogger();
}
return log;
}
public static void WriteString(string content)
{
var logger = CreateLogger();
logger.Information(content);
}
}
I have the following unit test:
[TestMethod]
public void UN_TestSerilog1()
{
Common.Components.Helpers.SerilogHelper.WriteString("Simple logging");
}
I've stepped through the debugger to be sure that the "level" variable is being set correctly - it's the enum value "Debug", which has a value of 1.
Although the SQL Server table is created OK, I don't see any rows inserted or any log .txt file.
I've also tried calling logger.Error(content) but still no output.
I've used the same helper code previously on a different site / project and it worked ok.
Where did I go wrong this time?
Serilog.Sinks.MSSqlServer is a "periodic batching" sink and, by default, it waits 5 seconds before sending the logs to the database. If your test ends before the sink has had a chance to write the messages to the database, they are simply lost...
You need to make sure you dispose the logger before your test runner ends, to force the sink to flush the logs to the database at that point. See Lifecycle of Loggers.
((IDisposable) logger).Dispose();
Of course, if you are sharing a static log instance across multiple tests, you can't just dispose the logger inside a single test, as that would mean the next test that runs won't have a logger to write to... In that case, you should look at your testing framework's support for executing code once before the test suite run starts, and once again when the test suite run ends.
I'm guessing you are using MSTest (because of the TestMethod), so you probably want to look into AssemblyInitialize and AssemblyCleanup, which would give you the opportunity to initialize the logger for all tests, and clean up after all tests finished running...
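As a minimal sketch (assuming you add a small CloseLogger method to SerilogHelper that casts its static log field to IDisposable and disposes it - that method doesn't exist in the code above):
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SerilogTestRunLifecycle
{
    [AssemblyInitialize]
    public static void AssemblyInit(TestContext context)
    {
        // Nothing to do here: SerilogHelper builds its logger lazily on first use.
    }

    [AssemblyCleanup]
    public static void AssemblyCleanup()
    {
        // Hypothetical helper method, e.g. implemented inside SerilogHelper as:
        //   public static void CloseLogger() => (log as IDisposable)?.Dispose();
        // Disposing the logger forces the batching MSSqlServer sink to flush.
        SerilogHelper.CloseLogger();
    }
}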
You might be interested in other ideas for troubleshooting Serilog issues: Serilog MSSQL Sink doesn't write logs to database
How do I avoid creating an infinite loop with a Serilog sink that itself logs?
The problem is that the base classes "MyTcpServer" and "MyTcpClient" use Serilog.
But since TcpSink also uses these same classes, sending a log entry will loop indefinitely.
How do I prevent this?
static void Main()
{
Serilog.Log.Logger = new LoggerConfiguration()
.WriteTo.TcpSink() // this is a TcpListener/server listening on port 1234
.WriteTo.Console()
.CreateLogger();
MyTcpServer AnotherServer = new MyTcpServer(4321);
}
public class MyTcpServer
{
///this class contains Log.Verbose|Debug|Error
private List<MyTcpClient> clients;
}
public class MyTcpClient
{
///this class contains Log.Verbose|Debug|Error
}
public class TcpServerSink : ILogEventSink
{
MyTcpServer server;
public TcpServerSink(int port = 1234)
{
server = new MyTcpServer(port);
}
public void Emit(LogEvent logevent)
{
string str = Newtonsoft.Json.JsonConvert.SerializeObject(logevent);
server.Send(str);
}
}
There are only two options here
Use MyTcpServer in TcpServerSink but don't log to TcpServerSink
Don't use MyTcpServer in TcpServerSink
For the first solution, make MyTcpServer depend on ILogger rather than on the static Log class. This way you can pass in whatever logger you want, or just disable logging in your sink:
server = new MyTcpServer(SilentLogger.Instance, 1234);
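A rough sketch of that first option, with the server taking its logger through the constructor (the port handling and Start method are illustrative):
using Serilog;

public class MyTcpServer
{
    private readonly ILogger _log;
    private readonly int _port;

    // The caller chooses the logger; the sink passes a no-op/silent logger
    // so the server's own logging can never re-enter the sink.
    public MyTcpServer(ILogger log, int port)
    {
        _log = log;
        _port = port;
    }

    public void Start()
    {
        _log.Debug("Listening on port {Port}", _port);
        // ... accept clients, create MyTcpClient instances, etc.
    }
}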
I personally prefer the second solution, because you should log to Serilog sinks only events related to your application logic, and TcpServerSink is not related to application logic. A common approach used in other Serilog sinks is the use of the static SelfLog, which writes to some TextWriter. E.g.
SelfLog.Out = Console.Error;
And then you can use this self-log to write some info about your sink. Also, instead of MyTcpServer, your sink should use something like a plain TcpClient. You can check the Splunk TcpSink for an example.
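For illustration, a minimal client-style sink along those lines might look like this (the names and the connect-per-event approach are just for brevity; failures are reported via SelfLog rather than back through Serilog):
using System;
using System.Net.Sockets;
using System.Text;
using Serilog.Core;
using Serilog.Debugging;
using Serilog.Events;

public class PlainTcpSink : ILogEventSink
{
    private readonly string _host;
    private readonly int _port;

    public PlainTcpSink(string host = "localhost", int port = 1234)
    {
        _host = host;
        _port = port;
    }

    public void Emit(LogEvent logEvent)
    {
        try
        {
            var payload = Encoding.UTF8.GetBytes(logEvent.RenderMessage() + Environment.NewLine);
            using (var client = new TcpClient(_host, _port))
            using (var stream = client.GetStream())
            {
                stream.Write(payload, 0, payload.Length);
            }
        }
        catch (Exception ex)
        {
            // Sink failures go to SelfLog, not back into the logging pipeline.
            SelfLog.WriteLine("PlainTcpSink failed to send event: {0}", ex);
        }
    }
}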
One option worth considering is to use Log.ForContext<MyTcpServer>() when logging within the TCP server:
Log.ForContext<MyTcpServer>().Information("Hello!");
and filter these messages out for the TCP sink:
// dotnet add package Serilog.Expressions
.WriteTo.Conditional(
"SourceContext not like 'MyNamespace.MyTcpServer%'",
wt => wt.TcpSink())
.WriteTo.Console()
This has the advantage of getting errors from the TCP sink through to the console, but the drawback that if you forget to use a contextual logger inside the TCP server, you'll still get a stack overflow.
I'm building a DLL in C# that I will be consuming with several different projects - so far, I know of a WPF application and a (binary) PowerShell module. Because the core business logic needs to be shared across multiple projects, I don't want the PowerShell module itself to contain the core logic. I'd just like to reference my primary library.
I'm struggling to figure out how to implement a clean logging solution in my core DLL that will be accessible via PowerShell's WriteVerbose() method. Without this, I can provide verbose output to PowerShell about PowerShell-specific things, but I can't provide any verbose output about "waiting for HTTP request" or other features that would be in the core DLL.
Here's a simple example of what I'm trying to do:
using System;
using System.Threading;
namespace CoreApp
{
public class AppObject
{
public AppObject() {}
public int DoStuffThatTakesForever()
{
// Assume logger is a logging object - could be an existing
// library like NLog, or I could write it myself
logger.Info("Doing step 1");
Thread.Sleep(5000);
logger.Info("Doing step 2");
Thread.Sleep(5000);
logger.Info("Doing step 3");
Random r = new Random();
return r.Next(0, 10);
}
}
}
////////////////////////////////////////////////////////////
// Separate VS project that references the CoreApp project
using System.Management.Automation;
using CoreApp;
namespace CoreApp.PowerShell
{
[Cmdlet(VerbsCommon.Invoke, "ThingWithAppObject")]
[OutputType(typeof(Int32))]
public class InvokeThingWithAppObject : Cmdlet
{
[Parameter(Position = 0)]
public AppObject InputObject {get; set;}
protected override void ProcessRecord()
{
// Here I want to be able to send the logging phrases,
// "Doing step 1", "Doing step 2", etc., to PowerShell's
// verbose stream (probably using Cmdlet.WriteVerbose() )
int result = InputObject.DoStuffThatTakesForever();
WriteObject(result);
}
}
}
How can I provide verbose output to PowerShell without tightly binding the core library to the PowerShell module?
I'm definitely open to other solutions, but here's how I ended up solving it:
In the core library, I created an ILogger interface with methods for Info, Verbose, Warn, etc. I created a DefaultLogger class that implemented that interface (by writing everything to the attached debugger), and I gave this class a static singleton instance.
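Roughly, the interface and default implementation look like this (the exact members are up to you; this is just a sketch matching the description above):
using System.Diagnostics;

public interface ILogger
{
    void Verbose(string message);
    void Debug(string message);
    void Info(string message);
    void Warn(string message);
    void Error(string message);
}

// Fallback used when the caller doesn't supply a logger;
// it simply forwards everything to the attached debugger/trace listeners.
public class DefaultLogger : ILogger
{
    public static readonly DefaultLogger Singleton = new DefaultLogger();
    private DefaultLogger() { }

    public void Verbose(string message) => Trace.WriteLine(message);
    public void Debug(string message) => Trace.WriteLine(message);
    public void Info(string message) => Trace.WriteLine(message);
    public void Warn(string message) => Trace.WriteLine(message);
    public void Error(string message) => Trace.WriteLine(message);
}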
In each method that I wanted logged, I added an optional ILogger parameter, and added a line to use the default logger if necessary. The method definitions now look like this:
public int DoSomething(ILogger logger = null)
{
logger = logger ?? DefaultLogger.Singleton;
// Rest of the code
Random r = new Random();
return r.Next(0, 10);
}
I had to do this for each method because the PSCmdlet.WriteVerbose() method expects to be called from the currently running cmdlet. I couldn't create a persistent class variable to hold a logger object because each time the user ran a cmdlet, the PSCmdlet object (with the WriteVerbose method I need) would change.
Finally, I went back to the PowerShell consumer project. I implemented the ILogger interface in my base cmdlet class:
public class MyCmdletBase : PSCmdlet, ILogger
{
public void Verbose(string message) => WriteVerbose(message);
public void Debug(string message) => WriteDebug(message);
// etc.
}
Now it's trivial to pass the current cmdlet as an ILogger instance when calling a method from the core library:
[Cmdlet(VerbsCommon.Invoke, "ThingWithAppObject")]
[OutputType(typeof(Int32))]
public class InvokeThingWithAppObject : MyCmdletBase
{
[Parameter(Mandatory = true, Position = 0)]
public AppObject InputObject {get; set;}
protected override void ProcessRecord()
{
int result = InputObject.DoSomething(this);
WriteObject(result);
}
}
In a different project, I'll need to write some kind of "log adapter" to implement the ILogger interface and write log entries to NLog (or whatever logging library I end up with).
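That adapter would be a thin wrapper, something along these lines (assuming the ILogger interface above and the NLog package; NLog types are fully qualified to avoid a name clash with the core ILogger):
// Adapts the core library's ILogger interface to NLog.
public class NLogAdapter : ILogger
{
    private static readonly NLog.Logger _log = NLog.LogManager.GetCurrentClassLogger();

    public void Verbose(string message) => _log.Trace(message);
    public void Debug(string message) => _log.Debug(message);
    public void Info(string message) => _log.Info(message);
    public void Warn(string message) => _log.Warn(message);
    public void Error(string message) => _log.Error(message);
}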
The only other hiccup I ran into is that WriteVerbose(), WriteDebug(), etc. cannot be called from a different thread than the main thread the cmdlet is running on. This was a significant problem, since I'm making async Web requests, but after banging my head on the wall I decided to just block and run the Web requests synchronously instead. I'll probably end up implementing both a synchronous and an async version of each Web-based function in the core library.
This approach feels a bit dirty to me, but it works brilliantly.
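For reference, the blocking workaround amounts to something like this inside the core class (FetchCountAsync is a made-up stand-in for the real async web call):
using System.Net.Http;
using System.Threading.Tasks;

public int DoSomething(ILogger logger = null)
{
    logger = logger ?? DefaultLogger.Singleton;
    logger.Info("Calling web service");
    // Blocking here keeps the ILogger callbacks (and therefore WriteVerbose/WriteDebug)
    // on the cmdlet's own thread.
    return FetchCountAsync().GetAwaiter().GetResult();
}

private static async Task<int> FetchCountAsync()
{
    using (var client = new HttpClient())
    {
        string body = await client.GetStringAsync("https://example.com/api/count");
        return int.Parse(body);
    }
}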
I am working on a custom Serilog sink, inheriting from PeriodicBatchingSink and calling my web service to write data into a database, using a pattern similar to Serilog.Sinks.Seq. Using this code as an example https://github.com/serilog/serilog-sinks-seq/blob/dev/src/Serilog.Sinks.Seq/Sinks/Seq/SeqSink.cs I am overriding EmitBatchAsync and calling my web service from there.
public AppSink(string serverUrl, int batchSizeLimit,
TimeSpan period, long? eventBodyLimitBytes)
: base(batchSizeLimit, period)
{
...
}
protected override async Task EmitBatchAsync(IEnumerable<LogEvent> events)
{
...
var result = await _httpClient.PostAsJsonAsync(Uri, logEntriesList);
}
I'm trying to write some xUnit tests to test the actual LogEvent round trip, but I can't figure out how to wait for the task to complete; using async and await doesn't work - the logger still processes all log events asynchronously, and the test completes without waiting. Neither Log.Debug nor the overridden EmitBatchAsync returns anything I can await.
This is just a sample of what I'm trying to test:
[Fact]
public void Test_LogMessages()
{
InitAppSink();
Log.Logger = new LoggerConfiguration().ReadFrom.AppSettings()
.WriteTo.Sink(testSink)
.MinimumLevel.ControlledBy(_levelSwitch)
.CreateLogger();
Log.Information("Information Test Log Entry");
Log.Debug("Debug Test Log Entry");
}
The sample tests on the Serilog page are not much help - even the comments there say "// Some very, very approximate tests here :)" - or maybe I'm missing something.
Or maybe it's the fact that I'm new to both Serilog and async testing.
What would be the best way to unit test Log.Debug("msg") in this case?
One option that may work for you is to dispose the sink and/or logger to flush any pending batches:
[Fact]
public void Test_LogMessages()
{
InitAppSink();
var logger = new LoggerConfiguration().ReadFrom.AppSettings()
.WriteTo.Sink(testSink)
.MinimumLevel.ControlledBy(_levelSwitch)
.CreateLogger();
logger.Information("Information Test Log Entry");
logger.Debug("Debug Test Log Entry");
((IDisposable)logger).Dispose();
}
The sink directly implements IDisposable, so:
testSink.Dispose();
...would probably achieve this too.
I have got a utility class, called ErrorLog, which does some basic error logging: recording error messages, stack traces, and so on.
So in my main c# app, I almost always chuck this piece of code, ErrorLog el = new ErrorLog() into the catch(Exception e) part, and then start calling its methods to do logging.
For example, here is 1 of the methods in ErrorLog class
public void logErrorTraceToFile(string fname, string errorMsg, string errorStackTrace)
{
//code here
}
Anyway, I am just wondering if it's a good approach to log errors in this way? It seems a bit clumsy to me, considering that in each catch block you create an ErrorLog object and call its methods, repeatedly.
Also, in terms of storing error log files, where is the best / most reasonable location to save them? Currently, I just hard-coded the directory, C:\ErrorLogs\, as I am still testing a few things. But I do want to get it right before I forget.
So any ideas or suggestions?
Thanks.
Look at ELMAH. It is very efficient in handling and logging errors in an application.
The errors get logged in the database.
Usually I use the Singleton pattern to have one application-wide Logger.
public class Logger
{
protected static Logger _logger;
protected Logger()
{
// init
}
public void Log(string message)
{
// log
}
public static Logger GetLogger()
{
return _logger ?? (_logger = new Logger());
}
}
As a place to store the log files I would use the application data or user data directory; only there can you be sure to have write access.
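For example, something like this builds a log path under the user's local application data folder ("MyApp" is a placeholder for your own application name):
using System;
using System.IO;

public static class LogPaths
{
    // Returns a writable log file path under the user's local application data folder.
    public static string GetLogFilePath()
    {
        string dir = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
            "MyApp", "Logs");
        Directory.CreateDirectory(dir); // no-op if the directory already exists
        return Path.Combine(dir, "errors.log");
    }
}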
Edit: That's how you would use the Logger from any place in your code:
Logger.GetLogger().Log("test");
I have configured log4net in my app successfully but one thing is a little bit annoying for me.
The log file is created (empty) after my app start even if no error occurs. I would like to log file be created only after some error.
I actually found a way to do this in this thread:
http://www.l4ndash.com/Log4NetMailArchive/tabid/70/forumid/1/postid/18271/view/topic/Default.aspx
I've tested the first method and it works. Just in case that link is no longer good, I'll reproduce the code here. Basically the author states that there are two ways of doing this.
First way:
Create a new locking model that only acquires a lock (and creates the file) if the appender's threshold actually allows logging.
public class MyLock : log4net.Appender.FileAppender.MinimalLock
{
public override Stream AcquireLock()
{
if (CurrentAppender.Threshold == log4net.Core.Level.Off)
return null;
return base.AcquireLock();
}
}
Now in the config file, set the threshold to start out as:
<threshold value="OFF" />
and make sure you set this new LockingModel as your model:
<lockingModel type="Namespace.MyLock" />
I'm using this with a rolling file appender.
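With this approach, the idea is that you then raise the threshold programmatically the first time you actually want to write something; a rough sketch (the target level is up to you):
private static void EnableFileLogging()
{
    // Turn on appenders that were configured with <threshold value="OFF" />.
    var hierarchy = (log4net.Repository.Hierarchy.Hierarchy)log4net.LogManager.GetRepository();
    foreach (log4net.Appender.IAppender appender in hierarchy.Root.Appenders)
    {
        var skeleton = appender as log4net.Appender.AppenderSkeleton;
        if (skeleton != null && skeleton.Threshold == log4net.Core.Level.Off)
        {
            skeleton.Threshold = log4net.Core.Level.All;
        }
    }
}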
The second method is listed at the link. I haven't tried this technique but it seems to be technically sound.
I know this is an old question but I think this can be useful for someone else.
We came across a similar situation where it was required that the application shouldn't leave an empty log file if no errors occurred.
We solved it by creating the following custom LockingModel class:
public class MinimalLockDeleteEmpty : FileAppender.MinimalLock
{
public override void ReleaseLock()
{
base.ReleaseLock();
var logFile = new FileInfo(CurrentAppender.File);
if (logFile.Exists && logFile.Length <= 0)
{
logFile.Delete();
}
}
}
It derives from the FileAppender.MinimalLock class, which releases the lock on the log file after each log message is written.
We added extra functionality to delete the log file if it is still empty after the lock is released. This prevents the application from leaving empty error log files behind when it runs and exits without any errors.
Pros
It will still create an empty log file during the configuration phase of Log4Net, ensuring that logging is working before the rest of the app starts. However, the log file is deleted immediately.
It doesn't require you to turn off logging in your config file by setting the threshold value to "OFF" and then, later on, turning logging on programmatically before writing your first log event.
Cons
This is most likely a slow method of managing your log files because the ReleaseLock method, and the check on the file length, will be called after every log event that is written to the log file. Only use it when you expect to have very few errors and it is a business requirement that the log file shouldn't exist when there are no errors.
The log files are created and deleted when empty. This might be a problem if you have other tools monitoring the log directory for file system changes. However, this was not a problem in our situation.
The following worked for me. The first call to OpenFile() occurs when the logger is configured. Subsequent calls happen when an actual log message is generated.
class CustomFileAppender : RollingFileAppender
{
private bool isFirstTime = true;
protected override void OpenFile(string fileName, bool append)
{
if (isFirstTime)
{
isFirstTime = false;
return;
}
base.OpenFile(fileName, append);
}
}
And in the config file, change the appender
<log4net>
<appender name="RollingFile" type="<your namespace>.CustomFileAppender">
...
</appender>
</log4net>
The sequence from the log4net source is as follows:
The first call to OpenFile() happens because of ActivateOptions() being called from FileAppender's constructor.
When a log message is generated, AppenderSkeleton's DoAppend() calls PreAppendCheck().
PreAppendCheck() is overridden in TextWriterAppender, the base of FileAppender.
The overridden PreAppendCheck() calls the virtual PrepareWriter() if the file is not yet open.
PrepareWriter() of FileAppender calls SafeOpenFile(), which in turn calls OpenFile().
The problem with that approach is that if the file exists but is read-only, or is in a directory which doesn't exist, etc., you won't find out until another error is already causing problems. You really want to be confident that logging is working before the rest of the app starts.
There may be a way of doing this anyway, but if not I suspect that this is the reason.
Another method that is quite simple is described in this message of the mailing list archive
Basically, with log4net, the log file is created when the logger is configured. To configure it to do otherwise is a bit hacky. The solution is to defer the execution of the configuration. The message above suggests doing the following when setting up the logger:
private static ILog _log = LogManager.GetLogger(typeof(Program));
public static ILog Log
{
get
{
if(!log4net.LogManager.GetRepository().Configured)
log4net.Config.XmlConfigurator.Configure(new FileInfo(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile));
return _log;
}
}
I usually configure log4net with the assembly attribute, which configures the logger automatically (thus creating the log file), and a simple getter for the log:
[assembly: log4net.Config.XmlConfigurator(Watch = true)]
...
public static log4net.ILog Log { get { return _log; } }
private static readonly log4net.ILog _log = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
But removing that attribute and adding the getter above with the additional logic instead solved the problem for me.
Note: in general I agree that in most cases it would be best to configure the logger and create the file (and even write to it) on application startup.
The AcquireLock and ReleaseLock method worked for me, but it bothered me that the file was created and deleted so many times. Here is another, similar option that shuts down the logger and deletes the empty log file when the program completes. Just call RemoveEmptyLogFile when you are done logging.
/// <summary>
/// Shuts down log4net and deletes the log file if it is still empty.
/// </summary>
private static void RemoveEmptyLogFile()
{
//Get the logfilename before we shut it down
log4net.Appender.FileAppender rootAppender = (log4net.Appender.FileAppender)((log4net.Repository.Hierarchy.Hierarchy)log4net.LogManager.GetRepository()).Root.Appenders[0];
string filename = rootAppender.File;
//Shut down all of the repositories to release lock on logfile
log4net.Repository.ILoggerRepository[] repositories = log4net.LogManager.GetAllRepositories();
foreach (log4net.Repository.ILoggerRepository repository in repositories)
{
repository.Shutdown();
}
//Delete log file if it's empty
var f = new FileInfo(filename);
if (f.Exists && f.Length <= 0)
{
f.Delete();
}
} // RemoveEmptyLogFile