Distinguishing source context in log4net output - c#

I'm fairly new to log4net, and I can't seem to find any clear examples of how to handle this situation.
I have a communication stack that consists of three layers: hardware, transport, and protocol. The three layers are contained inside a manager class. As far as the user of the code is concerned, they create the manager with a hardware type (Serial, Ethernet, SSL, etc.) and provide an address. There can be multiple manager instances, each connecting to a different target.
I'd like my output to give context about which connection a particular message came from (127.0.0.1, COM5, etc.). The ThreadContext isn't much use because the manager can be called from any thread and each layer runs on its own thread.
Is there any way to set a context based on a particular instance of an object? Or is there a better way to handle the output formatting?

One way of adding per-message context is to log not just a string but your own message object, containing both the log information and the connection hardware type (and any additional information you would like to include).
You can find an example of this here.
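A minimal sketch of what such a message object could look like (the names are illustrative):
// Illustrative custom message object carrying connection context alongside the text.
public class ConnectionLogMessage
{
    public string ConnectionInfo { get; set; } // e.g. "127.0.0.1" or "COM5"
    public string Text { get; set; }

    // log4net renders logged objects via ToString() unless a custom renderer is registered.
    public override string ToString()
    {
        return string.Format("[{0}] {1}", ConnectionInfo, Text);
    }
}
You would then log the object itself, e.g. Log.Info(new ConnectionLogMessage { ConnectionInfo = "COM5", Text = "Port opened" }), and the connection shows up wherever %message is rendered.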
Another option could be using a Nested Diagnostics Context:
using (NDC.Push("<Connection type>"))
{
    // perform your logging here
}
The NDC data will be included with the message and can be output with the %ndc pattern. A note of warning though, the NDC will be included with ANY messages logged within its using scope, which is perhaps why you would consider going the custom message route.

You should use an overload of LogManager.GetLogger() that takes a string name; that way you can pass pretty much anything as the logger name.
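For instance, a rough sketch where each manager instance names its logger after its connection (the "CommStack." prefix and the class shape are just illustrative), so the %logger pattern shows which connection a message came from:
// Each manager instance gets its own logger, named after the connection it manages.
using log4net;

public class ConnectionManager
{
    private readonly ILog _log;

    public ConnectionManager(string address) // e.g. "127.0.0.1" or "COM5"
    {
        _log = LogManager.GetLogger("CommStack." + address);
    }

    public void Open()
    {
        _log.Info("Connection opened"); // the logger name carries the connection context
    }
}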

Related

Can NLog receive a variable, whose value changes at run time, from a C# application? Ex. a batch Id

Can a C# application pass NLog a variable, at runtime, that can then be used as input for insertion into a database?
For example, I have an application that receives a unique batch Id as a command-line parameter while processing files. Multiple instances of this executable can be launched at the same time and each one will receive its own Batch ID. For the sake of troubleshooting ... I REALLY need to have NLog pass that Batch Id to the insertion of the log(s).
I saw there is a ${var} layout renderer, but that is meant to be defined within the config file already. All of these executables will be sharing the same config file, so that doesn't seem to be a solution.
Any assistance and code examples (or links to examples) are appreciated.
Thank you for your time.
It sounds like your batchId is global to the process, so you could check out the Gdc layout renderer.
Global Diagnostic Context - a dictionary structure to hold per-application-instance values.
Platforms Supported: All (NLog 4.1 allows storing any Object type, not just String)
Use the Global Diagnostics Context when you want to make certain information available to every logger in the current process.
The documentation explains how to use it, but very briefly: in your configuration file you would use ${gdc:item=batchId} where you want to put the batchId in your logs. Then, somewhere in the application (in the Main function, I'd wager) you'd do: GlobalDiagnosticsContext.Set("batchId", batchId);.
I don't know off hand the namespace where GlobalDiagnosticsContext lives.
If GlobalDiagnosticsContext is too broad, there is also Mapped Diagnostics Logical Context (MDLC) (Replaces the legacy MDC, as MDLC also supports async Tasks)
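A minimal sketch of the GDC approach, assuming the shared config file already has a target whose layout includes ${gdc:item=batchId}:
// Program.cs - assumes NLog is referenced and the shared config defines the target/layout.
using NLog;

class Program
{
    private static readonly Logger Logger = LogManager.GetCurrentClassLogger();

    static void Main(string[] args)
    {
        // args[0] is assumed to be the batch Id passed on the command line.
        GlobalDiagnosticsContext.Set("batchId", args[0]);

        Logger.Info("Processing started"); // the layout renders the batch Id automatically
    }
}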

Persist a variable in WCF application per instance

I am creating a WCF RESTful service and there is a need to persist a variable per user. Is there a way I can achieve this without having to pass the variable to all my calls?
I am trying to log the user's progress throughout the process: whether their request failed or succeeded, the IP address, when they requested the action, failure time, etc.
Please note I am new to WCF, thanks in advance.
I recently worked on this (except it wasn't RESTful). You could transmit information through HTTP headers and extract that information on the service side. See http://trycatch.me/adding-custom-message-headers-to-a-wcf-service-using-inspectors-behaviors/
For the client ID itself I can suggest two places to put it. One is OperationContext.Current.IncomingMessageProperties. Another is CorrelationManager.StartLogicalOperation which allows you to define a logical operation - that could be the service request, beginning to end - or multiple operations - and retrieve a unique ID for each operation.
I would lean toward the latter because it's part of System.Diagnostics and can prevent dependencies on System.ServiceModel. (The name CorrelationManager even describes what you're trying to do.)
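A rough sketch of that approach, using only System.Diagnostics (the RequestLogger wrapper and the Guid operation id are illustrative, not a prescribed pattern):
// Wrap each service request in a logical operation; loggers can read the operation stack.
using System;
using System.Diagnostics;

public class RequestLogger
{
    public void Execute(Action handleRequest)
    {
        Trace.CorrelationManager.StartLogicalOperation(Guid.NewGuid());
        try
        {
            handleRequest(); // anything logged in here sees the current logical operation
        }
        finally
        {
            Trace.CorrelationManager.StopLogicalOperation();
        }
    }
}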
In either case I would look at interception. That's the ideal way to read the value (wherever you store it) without having to pollute the individual methods with knowledge of logging and client IDs. (I saw from your message that you're trying to avoid that direct dependency on client IDs.)
Here's some documentation on adding Windsor to your WCF service. (At some point I'll add some end-to-end documentation on my blog.) Then, when you're using Windsor to instantiate your services, you can also use it to instantiate the dependencies and put interceptors around them that will perform your logging before or after those dependencies do their work. Within those interceptors you can access or modify that stack of logical operations.
I'm not doing Windsor justice by throwing out a few links. I'd like to flesh it out with some blog posts. But I recommend looking into it. It's beneficial for lots of reasons - interception just one. It helps with the way we compose services and dependencies.
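As a small illustration of the interception idea, a logging interceptor built on Castle DynamicProxy's IInterceptor (which Windsor can wrap around your dependencies at registration time) might look roughly like this; the Trace calls are placeholders for whatever logging you actually do:
// Logs before and after every intercepted call without touching the service methods.
using Castle.DynamicProxy;
using System.Diagnostics;

public class LoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        // Read whatever ambient context you stored (e.g. the logical operation stack) here.
        Trace.TraceInformation("Entering " + invocation.Method.Name);
        invocation.Proceed();
        Trace.TraceInformation("Leaving " + invocation.Method.Name);
    }
}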
Update - I added a blog post - how to add Windsor to a WCF service in five minutes.

Implementation of message in publish-subscribe pattern?

I'm currently implementing the publish-subscribe pattern for use in my future applications. Right now I'm having trouble figuring out the "best" way to design the message part of the pattern. I have a couple of ideas in mind but please tell me if there's a better way to do it.
Idea 1: Each message is an object that implements a simple tag interface IMessage.
Idea 2: Each message is represented as an array where the first index is the type of message and the second contains the payload.
Are any of these "better" than the other and if so, why? Please excuse me if this seems like a stupid question.
Your first idea makes more sense; take a look at the NServiceBus GitHub implementation of messaging patterns using marker interfaces or unobtrusive message definitions.
In essence, a message in a publish/subscribe scenario is an event: its name should describe the event, and it should carry the relevant references to the data related to that event.
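For illustration, a minimal sketch of the marker-interface idea (these types are made up, not the NServiceBus ones):
// A tag interface with no members marks a type as a message; the event name and
// its properties describe what happened.
using System;

public interface IMessage { }

public class OrderPlaced : IMessage
{
    public Guid OrderId { get; set; }
    public DateTime PlacedAtUtc { get; set; }
}

// A subscriber declares which message type it handles.
public interface IHandle<TMessage> where TMessage : IMessage
{
    void Handle(TMessage message);
}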
Andreas has a good article
HTH
Both approaches are useful. The first is useful when working with the message in your application. The second is useful if you are receiving raw message data over the network and have to determine how to deserialize it.
If you look at how WCF serializes, it puts the type as an attribute in the serialized output, so it knows what to deserialize to. However, if you are going for JSON serialization, for example, then you are probably better off having a property to hold your type information. Also be aware that this type information does not have to specify an actual CLR type, just an identifier that lets you know how to read the data.
Once you know how to read the data, you can create your object and take advantage of the type system, e.g. using tag interfaces.
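For example, a minimal envelope that carries such an identifier alongside the payload (the names are illustrative):
// "Type" is an agreed-upon identifier (not necessarily a CLR type name) that tells
// the receiver how to deserialize "Payload".
public class MessageEnvelope
{
    public string Type { get; set; }    // e.g. "order-placed"
    public string Payload { get; set; } // serialized body, read once Type is known
}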
You don't specify whether your messages cross process boundaries or not.
In the latter case, where messages are passed between layers in the same application, the first approach where messages are just objects (optionally implementing the same interface) is probably the easiest.
In the former, where you have interprocess and interoperable messaging, I think you get the most out of XML. XML is very flexible, easy to support in different technologies, allows you to sign messages in an interoperable way (XMLDSig) and allows you to create a variety of different input/output ports (tcp/http/database/filesystem). Also, messages can easily be validated for their integrity with XSD specifications.
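As a small sketch, validating an incoming XML message against an XSD in .NET could look roughly like this (the file paths are placeholders):
// Validate an incoming XML message against a schema before processing it.
using System.Xml;
using System.Xml.Schema;

public static class MessageValidator
{
    public static void Validate(string xmlPath, string xsdPath)
    {
        var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
        settings.Schemas.Add(null, xsdPath);

        // Reading the whole document throws XmlSchemaValidationException on a mismatch.
        using (var reader = XmlReader.Create(xmlPath, settings))
        {
            while (reader.Read()) { }
        }
    }
}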
In the pypubsub library (a publish-subscribe library for Python), I found that there was great benefit in naming the payload data, so the sender and receiver can just populate fields and not have to rely on the order of items in the message; plus it provides "code as documentation". For example, compare these, written in pseudocode. Using an array:
function listener(Object[] message):
do stuff with message[0], message[1], ...
message = { 123, 'abc', obj1 } // an array
sendMessage('topicName', message)
Using keywords:
function listener(int radius, string username = None):
do stuff with radius, username, ...
// username is marked as optional for receiver but we override the default
sendMessage('topicName', radius=123, username='abc')
Doing this in C# may be more of a challenge than in Python, but that capability is really useful in pypubsub. Also, you can then use XML to define the schema for your messages, documenting the payload items, and you can mark some payload items as optional (when they have a default value) vs required (when they don't). The lib can also check that the listener adheres to the "payload contract", and that the sender is providing all data promised via the "contract".
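For what it's worth, a rough C# approximation of named payload fields is a strongly typed message class instead of positional array entries (all names here are purely illustrative):
// Senders and receivers use field names instead of positional indexes.
using System;

public class CircleDrawn
{
    public int Radius { get; set; }
    public string Username { get; set; } // optional: may be left null
}

public static class Bus
{
    public static event Action<CircleDrawn> CircleDrawnTopic;

    public static void Publish(CircleDrawn message)
    {
        var handler = CircleDrawnTopic;
        if (handler != null)
        {
            handler(message);
        }
    }
}

// Sender:   Bus.Publish(new CircleDrawn { Radius = 123, Username = "abc" });
// Receiver: Bus.CircleDrawnTopic += msg => Console.WriteLine(msg.Radius);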
You should probably take a look at (and even use) existing libraries to get some ideas (pypubsub is at pypubsub.sourceforge.net).
Both approaches are viable. The second one means you are responsible for the de/serialization of the message; it gives you much more freedom, power and control over the message, especially versioning, but all of this comes at a cost, and I see that cost as sustainable only if some of the actors are not .NET actors. Otherwise go with the first approach and, as Sean pointed out, take a look at toolkits and frameworks that can greatly help you with all the plumbing.

How to maintain Id's of log-entries in an agile project

OK, so I've run into a situation I would like to resolve with minimal impact on our development group.
We are using log4net as our logging framework in a largish C# system (~40 production assemblies).
Now our support end wants to be able to correlate logged events with a database they maintain separately. A reasonable request.
In production our main log repository is the Windows Event-Log.
At the developer side our current pattern is this:
Whenever you want to log from a component, you instantiate a logger like this at the top of the class:
private static readonly ILog Log = LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
If you need stuff in the logging context, you put it in as early as possible in the flow of every thread, i.e. at the receiving end of service calls etc.
Whenever you want to do logging, you simply do
Log.Warn(str, ex) - (or Info, Error etc)
Now we want to "fix" this log-entry to a unique "eventId", and we can supply an extension method to ILogger, that will allow us to do:
Log.Warn(int, str, ex), when "int" is a number with these properties:
It is "mapped" to a durable store.
It points to one and only one Log
entry
If the source code Log statement is removed, the Id is not
reused for a new log statement.
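Something along these lines - "EventID" is the property name I believe log4net's EventLogAppender looks up, but treat that as an assumption:
// Extension method that attaches an event id to a single Warn call via the thread context.
using System;
using log4net;

public static class LoggerExtensions
{
    public static void Warn(this ILog log, int eventId, string message, Exception ex)
    {
        ThreadContext.Properties["EventID"] = eventId;
        try
        {
            log.Warn(message, ex);
        }
        finally
        {
            ThreadContext.Properties.Remove("EventID");
        }
    }
}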
My immediate solution would be to maintain a global enum that covers the set of possible "eventId"s and just instruct the developers to "use them only once".
We would then proceed to do some sort of "intelligent" mapping between our namespaces and "CategoryId" - e.g. everything in the "BusinessLayer" namespace gets one categoryId assigned.
But I think there is something I'm missing....
Any thoughts would be appreciated on:
How do you use EventId and CategoryId in your large systems? (Or "What" do you use them for)
Do any of you have an example of a "dynamic" way of creating the EventIds, such that you can keep the simple approach to logging and not require the developer to supply a unique Id at the code-statement level?
Sorry if my question is too broad; I am aware that I'm fishing a bit here.

Seeking One-Size-Fits-All Context Based Storage

First off, I wish context based storage was consistent across the framework!
With that said, I'm looking for an elegant solution to make these properties safe across ASP.NET, WCF and any other multithreaded .NET code. The properties are located in some low-level tracing helpers (these are exposed via methods if you're wondering why they're internal).
I'd rather not have a dependency on unneeded assemblies (like System.Web, etc). I don't want to require anyone using this code to configure anything. I just want it to work ;) That may be too tall of an order though...
Anyone have any tricks up their sleeves? (I've seen Spring's implementation)
internal static string CurrentInstance
{
    get
    {
        return CallContext.LogicalGetData(currentInstanceSlotName) as string;
    }
    set
    {
        CallContext.LogicalSetData(currentInstanceSlotName, value);
    }
}

internal static Stack<ActivityState> AmbientActivityId
{
    get
    {
        Stack<ActivityState> stack = CallContext.LogicalGetData(ambientActivityStateSlotName) as Stack<ActivityState>;
        if (stack == null)
        {
            stack = new Stack<ActivityState>();
            CallContext.LogicalSetData(ambientActivityStateSlotName, stack);
        }
        return stack;
    }
}
Update
By safe I do not mean synchronized. Background on the issue here
Here is a link to (at least part of) NHibernate's "context" implementation:
https://nhibernate.svn.sourceforge.net/svnroot/nhibernate/trunk/nhibernate/src/NHibernate/Context/
It is not clear to me exactly where or how this comes into play in the context of NHibernate. That is, if I wanted to store some values in "the context" would I get "the context" from NHibernate and add my values? I don't use NHibernate, so I don't really know.
I suppose that you could look and determine for yourself if this kind of implementation would be useful to you. Apparently the idea would be to inject the desired implementation, depending on the type of application (ASP.NET, WCF, etc). That probably implies some configuration (maybe minimal if one were to use MEF to load "the" ICurrentSessionContext interface).
At any rate, I found this idea interesting when I found it some time ago while searching for information on CallContext.SetData/GetData/LogicalSetData/LogicalGetData, Thread.SetData/GetData, [ThreadStatic], etc.
Also, based on your use of CallContext.LogicalSetData rather than CallContext.SetData, I assume that you want to take advantage of the fact that information associated with the logical thread will be propagated to child threads, as opposed to just wanting a "thread safe" place to store info. So, if you were to Set (or Push) the AmbientActivity in your app's startup and then not push any more activities, any subsequent threads would also be part of that same activity, since data stored via LogicalSetData is inherited by child threads.
If you have learned anything in the meantime since you first asked this question I would be very interested in hearing about it. Even if you haven't, I would be interested in learning about what you are doing with the context.
At the moment, I am working on maintaining some context information for logging/tracing (similar to Trace.CorrelationManager.ActivityId and Trace.CorrelationManager.LogicalOperationStack and log4net/NLog context support). I would like to save some context (current app, current app instance, current activity (maybe nested)) for use in an app or WCF service, AND I want to propagate it "automatically" across WCF service boundaries. This is to allow logging statements logged in a central repository to be correlated by client/activity/etc. We would be able to query and correlate all logging statements for a specific instance of a specific application. The logging statements could have been generated on the client or in one or more WCF services.
The WCF propagation of ActivityId is not necessarily sufficient for us because we want to propagate (or we think we do) more than just the ActivityId. Also, we want to propagate this information from Silverlight clients to WCF services and Trace.CorrelationManager is not available in Silverlight (at least not in 4.0, maybe something like it will be available in the future).
Currently I am prototyping the propagation of our "context" information using IClientMessageInspector and IDispatchMessageInspector. It looks like it will probably work ok for us.
Regarding a dependency on System.Web, the NHibernate implementation does have a "ReflectiveHttpContext" that uses reflection to access the HttpContext so there would not be a project dependency on System.Web. Obviously, System.Web would have to be available where the app is deployed if HttpContext is configured to be used.
I don't know that using CallContext is the right move here if the desire is simply to provide thread-safe access to your properties. If that is the case, the lock statement is all you need.
However, you have to make sure you are applying it correctly.
With CallContext, you are going to get thread-safe access because you are going to have separate instances of CallContext when calls come in on different threads (or different stores, rather). However, that's very different from making access to a resource thread-safe.
If you want to share the same value across multiple threads, then the lock statement is the way to go. Otherwise, if you want specific values on a per-thread/call basis, use the CallContext, or use the static GetData/SetData methods on the Thread class, or the ThreadStatic attribute (or any number of thread-based storage mechanisms).
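To illustrate the distinction (the slot name and member names here are arbitrary):
// A process-wide value guarded by a lock vs. a per-logical-call value in the CallContext.
using System.Runtime.Remoting.Messaging;

public static class ContextExamples
{
    private static readonly object Sync = new object();
    private static string shared; // one value visible to all threads

    public static string Shared
    {
        get { lock (Sync) { return shared; } }
        set { lock (Sync) { shared = value; } }
    }

    // A separate value per logical call path, inherited by child threads.
    public static string PerCall
    {
        get { return CallContext.LogicalGetData("PerCallSlot") as string; }
        set { CallContext.LogicalSetData("PerCallSlot", value); }
    }
}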
