I'll explain my use case, but I think it can be generalized, as it is mainly a code design question.
I use Serilog for logging in my application. What I want is that whenever the application actually logs something, if the level of that log event is above a maximum level, the application is forced to exit (in my case any Error or Fatal log triggers this):
public static void CheckForceExiting(LogEventLevel level, LogEventLevel maxLevel = MAX_LEVEL_BEFORE_EXIT)
{
//level is at most the maximum one, so keep the application running.
if (level <= maxLevel)
return;
//level is above maximum one, so Exit the App.
//Disable UI interaction
(Application.Current.MainWindow.DataContext as Main_VM).IsEnabled = false;
string msg = "An exception was caught !", title = "/!\\ERROR/!\\ ";
if (level > LogEventLevel.Error)
{
msg = "Unhandled exception occurred !";
title = "/!\\FATAL ERROR/!\\ ";
}
msg += " Closing SDC now.\n\nYou can find the log in ./Logs folder if needed.";
title += HelperUtils.AssemblyName.Name + ", Ver: " + HelperUtils.AssemblyName.Version.ToString();
MessageBox.Show(msg, title, MessageBoxButton.OK, MessageBoxImage.Error);
OnCloseLogging();
Environment.Exit(1);
}
My first, working attempt was to write a full wrapper around Serilog's own static Log class that, for each logging method on that class, adds a call to my method above.
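For context, this is roughly what that wrapper looked like (abridged sketch for illustration only; the real class had to forward every overload of every level):

using Serilog.Events;

// Abridged sketch of the wrapper approach; method names mirror Serilog's static Log class.
public static class LogWrapper
{
    public static void Error(string messageTemplate)
    {
        Serilog.Log.Error(messageTemplate);
        LoggingUtils.CheckForceExiting(LogEventLevel.Error);
    }

    public static void Fatal(string messageTemplate)
    {
        Serilog.Log.Fatal(messageTemplate);
        LoggingUtils.CheckForceExiting(LogEventLevel.Fatal);
    }

    // ...and so on for Verbose/Debug/Information/Warning and all of their overloads.
}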
But it didn't satisfy me to essentially copy/paste the Serilog Log class just to add a call to CheckForceExiting() in each of its logging methods.
So, instead, I preferred to use my own custom Serilog sink class that simply calls CheckForceExiting() as its "logging" action:
/// <summary>
/// Custom <see cref="ILogEventSink"/> Sink class
/// that runs the <see cref="LoggingUtils.CheckForceExiting(LogEventLevel, LogEventLevel)"/> check on each logged event
/// in order to force exiting the application if level is too high (default is Error or Fatal).
/// </summary>
class CheckLevelSink : ILogEventSink
{
/// <summary>
/// Default constructor
/// </summary>
public CheckLevelSink()
{
}
/// <summary>
/// Perform custom "logging".
/// Here we are just calling <see cref="LoggingUtils.CheckForceExiting(LogEventLevel, LogEventLevel)"/>
/// to force exiting the application if <paramref name="logEvent"/> level is above maximum allowed level
/// </summary>
/// <param name="logEvent">The log event to "log". Here we only check its level to decide whether to force exiting the application.</param>
public void Emit(LogEvent logEvent)
{
LoggingUtils.CheckForceExiting(logEvent.Level);
}
}
And I add this sink to the logging pipeline in the Serilog configuration:
/// <summary>
/// Creating a generic host to allow the creation and use of a Serilog logger
/// </summary>
/// <returns>The generic host created</returns>
public static IHost CreateHostBuilder()
{
return Host.CreateDefaultBuilder()
.UseSerilog((host, loggerConfig) =>
{
loggerConfig.WriteTo.File("Logs/log.txt", rollingInterval: RollingInterval.Day)
.Enrich.FromLogContext()
.MinimumLevel.Information();
loggerConfig.WriteTo.CheckLevel();
#if DEBUG
loggerConfig.WriteTo.Debug()
.MinimumLevel.Debug();
#endif
})
.ConfigureServices(services =>
{
})
.Build();
}
/// <summary>
/// Extension method pattern to allow an easy way to add the custom <see cref="CheckLevelSink"/> Sink in a <see cref="LoggerConfiguration"/>
/// </summary>
/// <param name="sinkConfiguration">The Sink configuration in which to "add" our custom <see cref="CheckLevelSink"/></param>
/// <param name="restrictedToMinimumLevel">The minimum <see cref="LogEventLevel"/> log level from which we allow logging in the <see cref="CheckLevelSink"/> Sink.</param>
/// <returns>The <see cref="LoggerConfiguration"/> of the logger that has added the custom <see cref="CheckLevelSink"/> Sink.</returns>
public static LoggerConfiguration CheckLevel(
this LoggerSinkConfiguration sinkConfiguration,
LogEventLevel restrictedToMinimumLevel = LogEventLevel.Error)
{
return sinkConfiguration.Sink(new CheckLevelSink(), restrictedToMinimumLevel);
}
OK, I find this better, but it still seems a little overkill, no?
So, in a more generalized way: whenever you call an external API method and you want to automatically add your custom logic on top of (before and/or after) the API's own logic, what is the best practice?
(Note: I considered inheriting from Serilog's file sink to override its write method, but it is a sealed class, so I'm not sure inheritance is a generally applicable solution.)
Thanks.
I'm currently developing a Web API used by our mobile application. If an API call is made that needs to send an email, the email is added to a queue in Azure Storage. For handling the queue (reading the queued mails and actually sending them), I thought the best solution would be to create a Hosted Service that does this in the background.
For implementing this I followed the instructions from the following documentation: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-2.1
I created a class that implements the abstract BackgroundService class from .NET Core 2.1. It looks like this:
namespace Api.BackgroundServices
{
/// <summary>
/// Mail queue service.
/// This handles the queued mails one by one.
/// </summary>
/// <seealso cref="Microsoft.Extensions.Hosting.BackgroundService" />
public class MailQueueService : BackgroundService
{
private readonly IServiceScopeFactory serviceScopeFactory;
/// <summary>
/// Initializes a new instance of the <see cref="MailQueueService"/> class.
/// </summary>
/// <param name="serviceScopeFactory">The service scope factory.</param>
public MailQueueService(IServiceScopeFactory serviceScopeFactory)
{
this.serviceScopeFactory = serviceScopeFactory;
}
/// <summary>
/// This method is called when the <see cref="T:Microsoft.Extensions.Hosting.IHostedService" /> starts. The implementation should return a task that represents
/// the lifetime of the long running operation(s) being performed.
/// </summary>
/// <param name="stoppingToken">Triggered when <see cref="M:Microsoft.Extensions.Hosting.IHostedService.StopAsync(System.Threading.CancellationToken)" /> is called.</param>
/// <returns>A <see cref="T:System.Threading.Tasks.Task" /> that represents the long running operations.</returns>
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
while (!stoppingToken.IsCancellationRequested)
{
await HandleMailQueueAsync();
//await Task.Delay(3000, stoppingToken);
}
}
private async Task HandleMailQueueAsync()
{
using (IServiceScope serviceScope = serviceScopeFactory.CreateScope())
{
TelemetryClient telemetryClient = serviceScope.ServiceProvider.GetService<TelemetryClient>();
try
{
IMailHandler mailHandler = serviceScope.ServiceProvider.GetService<IMailHandler>();
await mailHandler.HandleMailQueueAsync();
}
catch (Exception exception)
{
telemetryClient.TrackException(exception);
}
}
}
}
}
After registering it by calling
services.AddHostedService<MailQueueService>();
in the Startup.cs, it will successfully handle the mail queue, but all other calls to the Web API take almost ten times as long. Only if I comment out the Task.Delay() part in my implementation of the BackgroundService does the performance go back to an acceptable level.
However this seems more like a workaround than a real solution for my problem. Am I doing something else wrong that makes the performance tank like this?
I have a WPF client that is using the latest CefSharp package to host web applications. Since we have multiple web apps, we have multiple Views, each with its own instance of a browser/BrowserSubProcess.
Say, for lack of a better example, I simply go into Task Manager and kill one of the SubProcess.exe's. Is there an event we can tap into or some other way to be notified?
One thought would be to hook into the process by querying via some kind of P/Invoke, but that is a can of worms I would rather not open.
Thanks to @amaitland for pointing me in the right direction. It's a bit of a needle in a haystack, but it is there.
For anyone interested, you have to implement IRequestHandler that is referenced in his comment above. You can either
do it from scratch,
use their fully implemented example at Example RequestHandler,
or do something in between using DefaultRequestHandler (DefaultRequestHandler Override Example).
So if we use DefaultRequestHandler we can do something like this for just the terminated event:
/// <summary>
/// Handle events related to browser requests.
/// </summary>
public class RequestHandler : DefaultRequestHandler
{
/// <summary>
/// Called when the render process terminates unexpectedly.
/// </summary>
/// <param name="browserControl">The ChromiumWebBrowser control</param>
/// <param name="browser">the browser object</param>
/// <param name="status">indicates how the process terminated.</param>
/// <remarks>
/// Remember that <see cref="browserControl"/> is likely on a different thread so care should be used
/// when accessing its properties.
/// </remarks>
public override void OnRenderProcessTerminated(IWebBrowser browserControl, IBrowser browser, CefTerminationStatus status)
{
switch (status)
{
case CefTerminationStatus.AbnormalTermination:
Log.Error("Browser terminated abnormally.");
break;
case CefTerminationStatus.ProcessWasKilled:
Log.Error("Browser was killed.");
break;
case CefTerminationStatus.ProcessCrashed:
Log.Error("Browser crashed.");
break;
default:
Log.Error($"Browser terminated with unhandled status '{status}' while at address.");
break;
}
RenderProcessTerminated?.Invoke(browserControl, status);
}
/// <summary>
/// Fires when the render process terminates unexpectedly.
/// </summary>
public event EventHandler<CefTerminationStatus> RenderProcessTerminated;
}
If we have a browser object declared in the View like say:
<!--Bound to the ViewModel.Address property-->
<cef:ChromiumWebBrowser
x:Name="Browser"
Address="{Binding Address}">
</cef:ChromiumWebBrowser>
Then just wire in a new instance:
private readonly Dispatcher _mainDispatcher;
private readonly RequestHandler _requestHandler = new RequestHandler();
public MainWindow()
{
InitializeComponent();
_mainDispatcher = Dispatcher.CurrentDispatcher;
_requestHandler.RenderProcessTerminated += OnBrowserRenderProcessTerminated;
Browser.RequestHandler = _requestHandler;
}
private void OnBrowserRenderProcessTerminated(object sender, CefTerminationStatus e)
{
//Likely coming from a background thread
_mainDispatcher.InvokeAsync(() =>
Log.Error($"Browser crashed while at address: {Browser.Address}")
);
}
I have the following class:
/// <summary>
/// Represents an implementation of the <see cref="IAspNetCoreLoggingConfigurationBuilder"/> to configure the ASP.NET Core Logging.
/// </summary>
public class AspNetCoreLoggingConfigurationBuilder : IAspNetCoreLoggingConfigurationBuilder
{
#region Properties
/// <summary>
/// Gets the <see cref="ILogSource"/> that's used to write log entries.
/// </summary>
public ILogSource LogSource { get; private set; }
#endregion
#region IAspNetCoreLoggingConfigurationBuilder Members
/// <summary>
/// Sets the log source that should be used to save log entries.
/// </summary>
/// <param name="logSource">The source that's used to write log entries.</param>
public void SetLogSource(ILogSource logSource)
{
LogSource = logSource;
}
#endregion
}
I also have a method in which I create an instance of this class:
/// <summary>
/// Adds logging to the <see cref="IApplicationBuilder"/> request execution pipeline.
/// </summary>
/// <param name="app">The <see cref="IApplicationBuilder"/> to configure the application's request pipeline.</param>
/// <param name="configuration">Builder used to configure the ASP.NET Core Logging.</param>
/// <returns>A reference to this instance after the operation has completed.</returns>
public static IApplicationBuilder UseAspNetCoreLogging(this IApplicationBuilder app, Action<IAspNetCoreLoggingConfigurationBuilder> configuration)
{
var aspNetLoggerConfiguration = new AspNetCoreLoggingConfigurationBuilder();
configuration(aspNetLoggerConfiguration);
// Add the registered ILogSource into the registered services.
_services.AddInstance(typeof (ILogSource), aspNetLoggerConfiguration.LogSource);
// The entire configuration for the middleware has been done, so return the middleware.
return app.UseMiddleware<AspNetCoreLoggingMiddleware>();
}
Notice the first line here: I'm creating an instance of the class.
However, when I inspect this variable in a watch while my cursor is on the line configuration(aspNetLoggerConfiguration);, I get the message that the variable does not exist in the current context.
Creating an instance of the variable does work when I do it directly in the watch window.
Does anyone have a clue?
P.S. It's a DNX project which I'm testing with xUnit. The code is running in 'Debug' mode.
That's not a runtime error and not a compile error.
It's a problem of Visual Studio not being able to show the object in a debug window because it is a runtime object (something like that).
Another occurrence of this problem is in a WCF service client. Create a new service client Client and try to show client.InnerChannel in the watch window. It won't work. You can, however, create a temp object (bool, string, etc.) and write the desired value into it to see your value:
#if DEBUG
var tmpLog = aspNetLoggerConfiguration.LogSource;
#endif
You should see the LogSource in the tmpLog when your mouse is over it.
So I have a typical three-tiered application, layered as below:
DAL -> Repository -> Business -> Web.UI/API
I have been reading this article about registering dependencies by centralizing them via modules.
The web layer only has a reference to Business, which only has a reference to the Repository, which only has a reference to the lowest DAL layer. In this topology, since the UI/API layer knows nothing about the Repository and has no reference to it, I can't register the Repository's modules in the UI/API layer. Similarly, I can't register the modules present in the DAL in the Business layer. What I want to do is start the registration process in the topmost layer, which then sets off a cascading effect of registrations in the subsequent layers.
Typically this would look like each layer exposing a RegisterAllModules method and somehow triggering the RegisterAllModules method of the layer below it, as in the sketch below. Has something like this been done, or is there another way to do it? At this point I don't know whether I should roll out my own logic as described above, since I don't know if there is a documented way to do something like this. Thoughts on how best to go forward are what I am looking for.
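To make the idea concrete, here is a rough sketch of the cascade I have in mind (Autofac-style modules assumed; every type and member name here is invented for illustration):

using Autofac;

// Each layer registers its own module, then delegates to the layer beneath it.
public class RepositoryModule : Module
{
    protected override void Load(ContainerBuilder builder) { /* repository registrations */ }
}

public class BusinessModule : Module
{
    protected override void Load(ContainerBuilder builder) { /* business registrations */ }
}

public static class RepositoryRegistration
{
    public static void RegisterAllModules(ContainerBuilder builder)
    {
        builder.RegisterModule<RepositoryModule>();
        // ...would cascade further down into the DAL here.
    }
}

public static class BusinessRegistration
{
    public static void RegisterAllModules(ContainerBuilder builder)
    {
        builder.RegisterModule<BusinessModule>();
        RepositoryRegistration.RegisterAllModules(builder); // Business is the only layer that knows about Repository.
    }
}

The Web.UI/API composition root would then only call BusinessRegistration.RegisterAllModules(builder).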
Thanks.
Mmmm... I don't know if what follows is a proper response, but I'm going to try to give you the tools for a solution that suits your exact requirements.
Have you looked into JSON/XML module configuration? You do not need to know the assemblies through cross-references; you just need to know the names of the assemblies in app.config (or web.config). E.g.: you can register one module for Repositories in the Repo assembly and one module for Business services in Business.dll. This completely removes the need for cross-referencing the various assemblies (for Module scanning; you will still need references for method calls, but that is expected anyway). See here for details: http://docs.autofac.org/en/latest/configuration/xml.html#configuring-with-microsoft-configuration
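Roughly, assuming the Microsoft.Extensions.Configuration route from that link, the wiring could look like this (file, assembly and module names are made up):

using Autofac;
using Autofac.Configuration;
using Microsoft.Extensions.Configuration;

// autofac.json (hypothetical) lists the modules by assembly-qualified name, e.g.:
// {
//   "modules": [
//     { "type": "Repo.RepositoryModule, Repo" },
//     { "type": "Business.BusinessModule, Business" }
//   ]
// }
var config = new ConfigurationBuilder()
    .AddJsonFile("autofac.json")
    .Build();

var builder = new ContainerBuilder();
builder.RegisterModule(new ConfigurationModule(config)); // loads the modules listed in the file
var container = builder.Build();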
If you want to enforce that no call is made from (say) the UI to the Repo, you can leverage the "Instance Per Matching Lifetime Scope" feature (see http://docs.autofac.org/en/latest/lifetime/instance-scope.html#instance-per-matching-lifetime-scope). You can use that registration method in order to enforce a unit-of-work approach. E.g.: a Repository can only be resolved in a "repository" LifetimeScope, and only Business components open scopes tagged "repository".
An alternative approach to tagged scopes is using the "Instance per Owned<>" pattern. In this way, each Business service would require an Owned<Repository>.
Something like:
// Note: the generic arguments were lost in the original post; the type names below are illustrative.
var builder = new ContainerBuilder();
builder.RegisterType<Repository>();
builder.RegisterType<UnitOfWork>().InstancePerOwned<Repository>();
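On the consuming side, the "Instance per Owned<>" idea would look something like this (Repository and its members are placeholders):

using Autofac.Features.OwnedInstances;

public class Repository { public void Save() { } } // placeholder

// The business service receives an Owned<Repository> and controls the lifetime
// of the repository (and of everything scoped to it) itself.
public class BusinessService
{
    private readonly Owned<Repository> _repository;

    public BusinessService(Owned<Repository> repository)
    {
        _repository = repository;
    }

    public void DoWork()
    {
        _repository.Value.Save(); // use the owned instance
        _repository.Dispose();    // releases the repository's nested lifetime scope
    }
}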
AFAICT, a correct approach would be to register the components through Modules, referenced by the JSON/XML config, and each Module should target specific LifetimeScopes.
When a class calls the underlying layer, it should open a new LifetimeScope("underlying layer").
I will elaborate further, if you want advice on implementation strategies.
Best,
Alberto Chiesa
Edit:
I didn't know the meaning of "composition root". Well, thanks for the info!
I favor a SIMPLE configuration file (be it the .config file or a separate .json or .xml file), because I feel that a list of modules to be imported is expressed more simply as a list than as a class. But this is opinion.
What is not an opinion is that you can import modules from assemblies that are not referenced by the "Composition Root" assembly, in a simple and tested way.
So, I would go for Modules for every component registration, but for a textual configuration file for Module registration. YMMV.
Now, let me show you an example of the Unit of Work pattern that I'm using in many live projects.
In our architecture we make heavy use of a Service Layer, which holds responsibility for opening connections to the db and disposing them when finished, etc.
It's a simpler design than what you're after (I prefer shallow over deep), but the concept is the same.
If you are "out" of the Service Layer (e.g. in an MVC Controller, or in the UI), you need a ServiceHandle in order to access the Service layer. The ServiceHandle is the only class that knows about Autofac and is responsible for service resolution, invocation and disposal.
The access to the Service Layer is done in this way:
Non-service classes can require only a ServiceHandle;
invocation is done through _serviceHandle.Invoke(Func);
Autofac injects the ready-to-use handles via constructor injection.
This is done through the use of BeginLifetimeScope(tag) method, and registering services (in a module) in this way:
// register every service except for ServiceBase
Builder.RegisterAssemblyTypes(_modelAssemblies)
.Where(t => typeof(IService).IsAssignableFrom(t) && (t != typeof(ServiceBase)))
.InstancePerDependency();
// register generic ServiceHandle
Builder.RegisterGeneric(typeof(ServiceHandle<>))
.AsSelf()
.AsImplementedInterfaces()
.InstancePerDependency();
And every shared resource is registered as InstancePerMatchingLifetimeScope("service").
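A registration of that kind could look roughly like this (SqlUnitOfWork and IUnitOfWork are placeholder names; every component resolved inside a scope tagged "service" then shares the same instance):

// Shared per unit of work: one instance per lifetime scope tagged "service".
Builder.RegisterType<SqlUnitOfWork>()
    .As<IUnitOfWork>()
    .InstancePerMatchingLifetimeScope("service");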
So, an example invocation would be:
... in the constructor:
public YourUiClass(ServiceHandle<MyServiceType> myserviceHandle)
{
this._myserviceHandle = myserviceHandle;
}
... in order to invoke the service:
var result = _myserviceHandle.Invoke(s => s.myServiceMethod(parameter));
This is the ServiceHandle implementation:
/// <summary>
/// Provides a managed interface to access Model Services
/// </summary>
/// <typeparam name="TServiceType">The Type of the parameter to be managed</typeparam>
public class ServiceHandle<TServiceType> : IServiceHandle<TServiceType> where TServiceType : IService
{
static private readonly ILog Log = LogManager.GetLogger(typeof(ServiceHandle<TServiceType>));
private readonly ILifetimeScope _scope;
/// <summary>
/// True if there where Exceptions caught during the last Invoke execution.
/// </summary>
public bool ErrorCaught { get; private set; }
/// <summary>
/// List of the errors caught during execution
/// </summary>
public List<String> ErrorsCaught { get; private set; }
/// <summary>
/// Contains the exception that was thrown during the
/// last Invoke execution.
/// </summary>
public Exception ExceptionCaught { get; private set; }
/// <summary>
/// Default constructor
/// </summary>
/// <param name="scope">The current Autofac scope</param>
public ServiceHandle(ILifetimeScope scope)
{
if (scope == null)
throw new ArgumentNullException("scope");
_scope = scope;
ErrorsCaught = new List<String>();
}
/// <summary>
/// Invoke a method to be performed using a
/// service instance provided by the ServiceHandle
/// </summary>
/// <param name="command">
/// Void returning action to be performed
/// </param>
/// <remarks>
/// The implementation simply wraps the Action into
/// a Func returning an Int32; the returned value
/// will be discarded.
/// </remarks>
public void Invoke(Action<TServiceType> command)
{
Invoke(s =>
{
command(s);
return 0;
});
}
/// <summary>
/// Invoke a method to be performed using a
/// service instance provided by the ServiceHandle
/// </summary>
/// <typeparam name="T">Type of the data to be returned</typeparam>
/// <param name="command">Action to be performed. Returns T.</param>
/// <returns>A generically typed T, returned by the provided function.</returns>
public T Invoke<T>(Func<TServiceType, T> command)
{
ErrorCaught = false;
ErrorsCaught = new List<string>();
ExceptionCaught = null;
T retVal;
try
{
using (var serviceScope = GetServiceScope())
using (var service = serviceScope.Resolve<TServiceType>())
{
try
{
retVal = command(service);
service.CommitSessionScope();
}
catch (RollbackException rollbackEx)
{
retVal = default(T);
if (System.Web.HttpContext.Current != null)
ErrorSignal.FromCurrentContext().Raise(rollbackEx);
Log.InfoFormat(rollbackEx.Message);
ErrorCaught = true;
ErrorsCaught.AddRange(rollbackEx.ErrorMessages);
ExceptionCaught = rollbackEx;
DoRollback(service, rollbackEx.ErrorMessages, rollbackEx);
}
catch (Exception genericEx)
{
if (service != null)
{
DoRollback(service, new List<String>() { genericEx.Message }, genericEx);
}
throw;
}
}
}
catch (Exception ex)
{
if (System.Web.HttpContext.Current != null)
ErrorSignal.FromCurrentContext().Raise(ex);
var msg = (Log.IsDebugEnabled) ?
String.Format("There was an error executing service invocation:\r\n{0}\r\nAt: {1}", ex.Message, ex.StackTrace) :
String.Format("There was an error executing service invocation:\r\n{0}", ex.Message);
ErrorCaught = true;
ErrorsCaught.Add(ex.Message);
ExceptionCaught = ex;
Log.ErrorFormat(msg);
retVal = default(T);
}
return retVal;
}
/// <summary>
/// Performs a rollback on the provided service instance
/// and records exception data for error retrieval.
/// </summary>
/// <param name="service">The Service instance whose session will be rolled back.</param>
/// <param name="errorMessages">A List of error messages.</param>
/// <param name="ex"></param>
private void DoRollback(TServiceType service, List<string> errorMessages, Exception ex)
{
service.RollbackSessionScope();
}
/// <summary>
/// Creates a Service Scope overriding Session resolution:
/// all the service instances share the same Session object.
/// </summary>
/// <returns></returns>
private ILifetimeScope GetServiceScope()
{
return _scope.BeginLifetimeScope("service");
}
}
Hope it helps!
I am creating a Composite WPF (Prism) app with several different projects (Shell, modules, and so on). I am getting ready to implement logging, using Log4Net. It seems there are two ways to set up the logging:
Let the Shell project do all of the actual logging. It gets the reference to Log4Net, and other projects fire composite events to let the Shell know that it needs to log something. Those projects fire the events only for levels where logging is turned on in the Shell's app.config file (DEBUG, ERROR, etc), so as not to degrade performance.
Give each project, including modules, a Log4Net reference, and let the project do its own logging to a common log file, instead of sending messages to the Shell for logging.
Which is the better approach? Or, is there another approach that I should consider? Thanks for your help.
The simplest approach to logging in Prism is to override the LoggerFacade property in your Bootstrapper. By overriding the LoggerFacade, you can pass in an instance of any logger you want, with any configuration needed, as long as the logger implements the ILoggerFacade interface.
I've found the following to work quite well for logging (I'm using the Enterprise Library Logging block, but applying something similar for Log4Net should be straightforward):
Create a Bootstrapper in your Shell:
-My Project
-Shell Module (add a reference to the Infrastructure project)
-Bootstrapper.cs
Create a Logging Adapter in your Infrastructure project, i.e.:
-My Project
-Infrastructure Module
-Adapters
-Logging
-MyCustomLoggerAdapter.cs
-MyCustomLoggerAdapterExtendedAdapter.cs
-IFormalLogger.cs
The MyCustomLoggerAdapter class will be used to override the 'LoggerFacade' property in the Bootstrapper. It should have a default constructor that news everything up.
Note: by overriding the LoggerFacade property in the Bootstrapper, you are providing a logging mechanism for Prism to use to log its own internal messages. You can use this logger throughout your application, or you can extend it for a more fully featured logger (see MyCustomLoggerAdapterExtendedAdapter/IFormalLogger).
public class MyCustomLoggerAdapter : ILoggerFacade
{
#region ILoggerFacade Members
/// <summary>
/// Logs an entry using the Enterprise Library logging.
/// For logging a Category.Exception type, it is preferred to use
/// the EnterpriseLibraryLoggerAdapter.Exception methods."
/// </summary>
public void Log( string message, Category category, Priority priority )
{
if( category == Category.Exception )
{
Exception( new Exception( message ), ExceptionPolicies.Default );
return;
}
Logger.Write( message, category.ToString(), ( int )priority );
}
#endregion
/// <summary>
/// Logs an entry using the Enterprise Library Logging.
/// </summary>
/// <param name="entry">the LogEntry object used to log the
/// entry with Enterprise Library.</param>
public void Log( LogEntry entry )
{
Logger.Write( entry );
}
// Other methods if needed, i.e., a default Exception logger.
public void Exception ( Exception ex ) { /* do stuff */ }
}
The MyCustomLoggerAdapterExtendedAdapter is derived from MyCustomLoggerAdapter and can provide additional constructors for a more full-fledged logger.
public class MyCustomLoggerAdapterExtendedAdapter : MyCustomLoggerAdapter, IFormalLogger
{
private readonly ILoggingPolicySection _config;
private LogEntry _infoPolicy;
private LogEntry _debugPolicy;
private LogEntry _warnPolicy;
private LogEntry _errorPolicy;
private LogEntry InfoLog
{
get
{
if( _infoPolicy == null )
{
LogEntry log = GetLogEntryByPolicyName( LogPolicies.Info );
_infoPolicy = log;
}
return _infoPolicy;
}
}
// removed backing code for brevity
private LogEntry DebugLog... WarnLog... ErrorLog
// ILoggingPolicySection is passed via constructor injection in the bootstrapper
// and is used to configure various logging policies.
public MyCustomLoggerAdapterExtendedAdapter ( ILoggingPolicySection loggingPolicySection )
{
_config = loggingPolicySection;
}
#region IFormalLogger Members
/// <summary>
/// Info: informational statements concerning program state,
/// representing program events or behavior tracking.
/// </summary>
/// <param name="message"></param>
public void Info( string message )
{
InfoLog.Message = message;
InfoLog.ExtendedProperties.Clear();
base.Log( InfoLog );
}
/// <summary>
/// Debug: fine-grained statements concerning program state,
/// typically used for debugging.
/// </summary>
/// <param name="message"></param>
public void Debug( string message )
{
DebugLog.Message = message;
DebugLog.ExtendedProperties.Clear();
base.Log( DebugLog );
}
/// <summary>
/// Warn: statements that describe potentially harmful
/// events or states in the program.
/// </summary>
/// <param name="message"></param>
public void Warn( string message )
{
WarnLog.Message = message;
WarnLog.ExtendedProperties.Clear();
base.Log( WarnLog );
}
/// <summary>
/// Error: statements that describe non-fatal errors in the application;
/// sometimes used for handled exceptions. For more defined Exception
/// logging, use the Exception method in this class.
/// </summary>
/// <param name="message"></param>
public void Error( string message )
{
ErrorLog.Message = message;
ErrorLog.ExtendedProperties.Clear();
base.Log( ErrorLog );
}
/// <summary>
/// Logs an Exception using the Default EntLib Exception policy
/// as defined in the Exceptions.config file.
/// </summary>
/// <param name="ex"></param>
public void Exception( Exception ex )
{
base.Exception( ex, ExceptionPolicies.Default );
}
#endregion
/// <summary>
/// Creates a LogEntry object based on the policy name as
/// defined in the logging config file.
/// </summary>
/// <param name="policyName">name of the policy to get.</param>
/// <returns>a new LogEntry object.</returns>
private LogEntry GetLogEntryByPolicyName( string policyName )
{
if( !_config.Policies.Contains( policyName ) )
{
throw new ArgumentException( string.Format(
"The policy '{0}' does not exist in the LoggingPoliciesCollection",
policyName ) );
}
ILoggingPolicyElement policy = _config.Policies[policyName];
var log = new LogEntry();
log.Categories.Add( policy.Category );
log.Title = policy.Title;
log.EventId = policy.EventId;
log.Severity = policy.Severity;
log.Priority = ( int )policy.Priority;
log.ExtendedProperties.Clear();
return log;
}
}
public interface IFormalLogger
{
void Info( string message );
void Debug( string message );
void Warn( string message );
void Error( string message );
void Exception( Exception ex );
}
In the Bootstrapper:
public class MyProjectBootstrapper : UnityBootstrapper
{
protected override void ConfigureContainer()
{
// ... arbitrary stuff
// create constructor injection for the MyCustomLoggerAdapterExtendedAdapter
var logPolicyConfigSection = ConfigurationManager.GetSection( LogPolicies.CorporateLoggingConfiguration );
var injectedLogPolicy = new InjectionConstructor( logPolicyConfigSection as LoggingPolicySection );
// register the MyCustomLoggerAdapterExtendedAdapter
Container.RegisterType<IFormalLogger, MyCustomLoggerAdapterExtendedAdapter>(
new ContainerControlledLifetimeManager(), injectedLogPolicy );
}
private readonly MyCustomLoggerAdapter _logger = new MyCustomLoggerAdapter();
protected override ILoggerFacade LoggerFacade
{
get
{
return _logger;
}
}
}
Finally, to use either logger, all you need to do is add the appropriate interface to your class' constructor and the UnityContainer will inject the logger for you:
public partial class Shell : Window, IShellView
{
private readonly IFormalLogger _logger;
private readonly ILoggerFacade _loggerFacade;
public Shell( IFormalLogger logger, ILoggerFacade loggerFacade )
{
_logger = logger;
_loggerFacade = loggerFacade;
_logger.Debug( "Shell: Instantiating the .ctor." );
_loggerFacade.Log( "My Message", Category.Debug, Priority.None );
InitializeComponent();
}
#region IShellView Members
public void ShowView()
{
_logger.Debug( "Shell: Showing the Shell (ShowView)." );
_loggerFacade.Log( "Shell: Showing the Shell (ShowView).", Category.Debug, Priority.None );
this.Show();
}
#endregion
}
I don't think you need a separate module for the logging policy. By adding the logging policies to your Infrastructure module, all other modules will get the required references (assuming you add the Infrastructure module as a reference to your other modules). And by adding the logger to your Bootstrapper, you can let the UnityContainer inject the logging policy as needed.
There is a simple example of using Log4Net in the CompositeWPF contrib project on CodePlex as well.
HTH's
I finally got back to this one, and it turns out the answer is really pretty simple. In the Shell project, configure Log4Net as a custom logger. The Prism documentation (Feb. 2009) explains how to do that at p. 287. The Shell project is the only project that needs a reference to Log4Net. To access the logger (assuming all modules are passed a reference to the Prism IOC container), simply resolve ILoggerFacade in the IOC container, which will give you a reference to your custom logger. Pass messages to this logger in the normal manner.
So, there is no need for any eventing back to the Shell, and no need for modules to have Log4Net references. Holy mackerel, I love IOC containers!
The problem with LoggerFacade, suggested above, is that the non-Prism parts of your app wouldn't know about it. A logger, IMHO, needs to be more low-level and more universally accessible than something scoped to the Composite framework.
My suggestion is: why not just rely on standard Debug/Trace and implement your own TraceListener? This way it will work well for both the Prism and non-Prism parts, and you can achieve the desired level of flexibility.
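A hedged sketch of that idea, forwarding Debug/Trace output into log4net (the listener class and logger name are made up):

using System.Diagnostics;

// Forwards standard Debug/Trace messages to log4net, so both Prism and
// non-Prism parts of the app can simply call Trace.WriteLine / Debug.WriteLine.
public class Log4NetTraceListener : TraceListener
{
    private static readonly log4net.ILog Log =
        log4net.LogManager.GetLogger("TraceOutput"); // logger name is arbitrary

    public override void Write(string message)
    {
        Log.Info(message);
    }

    public override void WriteLine(string message)
    {
        Log.Info(message);
    }
}

It only has to be registered once, either in code (Trace.Listeners.Add(new Log4NetTraceListener());) or through the <system.diagnostics> listeners section of app.config.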
Having separate logger configurations for each module might turn into problems at deployment. Remember that a power user or administrator may completely change the target of your logging, redirecting it to a database or to a central aggregated logging service (like my company's one). If all the modules have separate configurations, the power user/admin has to repeat the configuration for each module (in each .config file, or in each module's section in the main app.config), and repeat this every time a change in location/formatting occurs. Besides, given that appenders are added at run time from configuration and there may be appenders you don't know anything about at the moment, someone may use an appender that locks the file and causes a conflict between the app modules. Having one single log4net config simplifies administration.
Individual modules can still be configured according to their own needs, separately (e.g. INFO for the DB layer, ERROR for the UI layer). Each module would get its logger by asking for its own type: LogManager.GetLogger(typeof(MyModule)); but only the Shell will configure the logger (e.g. call XmlConfigurator.Configure), using its own app.config.
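In code, the split described above might look like this (sketch; the class names are illustrative):

// In the Shell: configure log4net exactly once, from the Shell's own app.config.
public partial class App : System.Windows.Application
{
    protected override void OnStartup(System.Windows.StartupEventArgs e)
    {
        log4net.Config.XmlConfigurator.Configure();
        base.OnStartup(e);
    }
}

// In any module: just ask for a logger keyed to the module's own type.
// Which appenders and levels apply is decided entirely by the Shell's configuration.
public class MyModule
{
    private static readonly log4net.ILog Log =
        log4net.LogManager.GetLogger(typeof(MyModule));

    public void Initialize()
    {
        Log.Info("MyModule initialized.");
    }
}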