Using MSMQ for asynchronous logging - C#

I need to add logging to our application and would like to keep the time spent on logging as low as possible. I am thinking of using MSMQ: the application would log to MSMQ, and a separate process would then write the messages from MSMQ to the database/files asynchronously.
Is this a good idea in terms of performance, or is logging to flat files synchronously with log4net better?
Also, I am thinking of coding a logging abstraction layer so that I can plug in other logging tools later without affecting the rest of the code.
Please advise.
Thanks,
sveerap

I would advise against this. This is a needlessly complex solution for a problem that doesn't really exist. I've used log4net in multiple projects and never saw any significant performance degradation because of it.
It's a better idea to take care in selecting the right logging level for each log message (DEBUG, INFO, WARN, etc.). When you start your project, and perhaps for a short time in production, you log everything from DEBUG upwards. When you're confident everything works, you switch to INFO in the configuration. This should be enough to tackle any performance issues you encounter with logging.
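As a sketch, here is a minimal log4net configuration where the volume is controlled by a single root level; the appender name, file path, and pattern are illustrative:

```xml
<log4net>
  <appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
    <file value="app.log" />
    <appendToFile value="true" />
    <rollingStyle value="Date" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <!-- Switch this to INFO once you're confident everything works -->
    <level value="DEBUG" />
    <appender-ref ref="RollingFile" />
  </root>
</log4net>
```

Changing the `<level>` value is a configuration-only change, so no recompile or redeploy of the application code is needed.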
Concerning your abstraction layer, I wouldn't do this either. Log4net already abstracts the details of logging via its appenders. And if you really want an abstraction, you may also want to take a look at Common.Logging.

For what it's worth, there are scenarios where this isn't overkill. But, for most applications I would say that it is.
I work in an environment comprised of several z/OS mainframes and a variety of *nix midranges. The systems all write their logging messages to a shared queue, which a dedicated process consumes. Organisationally, this was found to provide better throughput and to ensure the consistency of the log.
With that said, I can see the advantage of this approach for your applications. For example, if you develop a lot of internal applications, a common log (say, in the database, with a process that reads the queue and writes the entries to the database) would let you aggregate all of your messages.
However, you will probably find that log4net or another .NET logging package will perfectly suit your needs.
There are advantages to both, but as everyone else has said - using MSMQ for logging is (probably) like going after a fly with a bazooka.

Honestly, MSMQ seems overkill for logging messages. Unless you absolutely need reliable delivery of the log messages, log4net is a perfectly good fit. Keep in mind also that creating a message in MSMQ might take longer than actually writing to a buffered file.
You may also want to have a look at the System.Diagnostics.Trace class.
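For reference, a minimal use of System.Diagnostics.Trace with an extra listener; the log file name is illustrative:

```csharp
using System.Diagnostics;

class Program
{
    static void Main()
    {
        // Send trace output to a text file in addition to the default listener
        Trace.Listeners.Add(new TextWriterTraceListener("trace.log"));
        Trace.AutoFlush = true;

        Trace.TraceInformation("Application started");
        Trace.TraceWarning("Low disk space");
        Trace.TraceError("Could not open connection");
    }
}
```

Trace output can also be routed and filtered via listeners declared in app.config, so the destination stays configurable without code changes.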

Related

Should I use a Message Queue Broker for logging in a centralized schema?

I want to build a centralized log for our infrastructure, for the ease of going to one place to find everything. We currently have about 15-20 different systems we would like to log, and I was thinking of using NLog to a web service.
So far so good, but then I read a thread which points out that:
Availability of the central repository is a little more complicated than just 'if you can't connect, don't log it' because usually the most interesting events occur when there are problems, not when things go smooth
So the author (Remus Rusanu) said that using MSMQ is a good way to go if you are in a Microsoft environment (which I am). I think this makes some sense, so I wanted to see other opinions, and I found another article where the general idea is that MSMQ is too much for "just logging", though this time the reasoning behind that conclusion was performance.
So, in your experience, should I worry about high availability of the centralized logger, or should I just log to local files when the service is not available?
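One way to get the "log locally when the service is down" behavior described above, without bringing in MSMQ, is NLog's FallbackGroup wrapper target. A sketch (fragment only; target names, URL, and file name are illustrative):

```xml
<nlog>
  <targets>
    <target name="fallback" xsi:type="FallbackGroup" returnToFirstOnSuccess="true">
      <!-- Try the central web service first -->
      <target name="central" xsi:type="WebService"
              url="http://logserver/api/log"
              protocol="JsonPost" />
      <!-- If that fails, fall back to a local file -->
      <target name="local" xsi:type="File" fileName="fallback.log" />
    </target>
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="fallback" />
  </rules>
</nlog>
```

With `returnToFirstOnSuccess="true"`, NLog retries the central target on later writes, so logging resumes to the central service once it comes back.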

Log4Net logging options (FileAppender vs. EventLogAppender)

I have implemented Log4Net logging in my C# project.
Right now I'm using EventLogAppender to log all the errors, but I want to know whether FileAppender is a better approach. I have concerns about performance when saving to a file instead of logging to the system event log.
What are the benefits of using FileAppender vs EventLogAppender?
Performance-wise, both are fast, but I suspect file-based will be faster. If you are writing so many logs that this is a concern, then your program sounds pretty "chatty", so I would go with the FileAppender; system logs are nice when entries are concise and occasional, but they quickly get tedious if they are long and/or frequent. File-based logs are generally easier to archive, if that's a concern. Event logs, on the other hand, are nice if you are already monitoring the event logs or if you want to put everything in a "standard" place, that is, somewhere the user will always know to look.
Note that you don't have to choose just one or the other--you can do your short/occasional status updates in the event log and details in the file log--that's the approach I usually take.
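The mixed approach, short/important entries in the event log and full detail in a file, can be sketched in log4net configuration; the application name, file name, and WARN threshold are illustrative:

```xml
<log4net>
  <appender name="File" type="log4net.Appender.FileAppender">
    <file value="details.log" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <appender name="EventLog" type="log4net.Appender.EventLogAppender">
    <applicationName value="MyApp" />
    <!-- Only WARN and above reach the event log -->
    <threshold value="WARN" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%-5level %message" />
    </layout>
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="File" />
    <appender-ref ref="EventLog" />
  </root>
</log4net>
```

Both appenders hang off the same root logger; the per-appender threshold is what keeps the event log concise.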

Please help me design this event reporting system

I'm trying to design a system which reports activity events to a database via a web service. The web service and database have already been built (COTS software) - all I have to do is provide the event source.
The catch, though, is that the event source needs to be fault tolerant. We have multiple replicated databases that I can talk to, so if the web service or database I'm talking to goes down, the software can quickly switch to another one that's up.
What I need help with though is the case when all the databases are down. I've already designed a queue that will hold on to the events as they pile in (and burst them out once the connection is restored), but the queue is an in-memory structure: if my app crashes in this state, or if power is lost, etc., then all the events in the queue are lost. This is unacceptable. What I need is a way to persist the events so that when a database comes back online I can send a burst of queued-up events, even in the event of power loss or crash.
I know that I don't want to re-implement the queue itself to use the file system as a backing store. This would work (and I've tried it) - but that method slows the system down dramatically as the hard drive becomes a bottleneck. Aside from this though, I can't think of a single way to design this system such that all the events are safely stored on the hard drive only when access to the database isn't available.
Does anyone have any ideas? =)
When I need messaging with fault tolerance (and/or guaranteed delivery, which based on your description I am guessing you also need), I usually turn to MSMQ. It provides both fault tolerance (messages are stored on disk in case of machine restart) and guaranteed delivery (messages will automatically and continually resend until they are received), as well as transactional sends and receives, message journaling, poison message handling, and other features.
I have been able to achieve a throughput of several thousand messages per second using MSMQ. Frankly, I am not sure that you will get too much better than that while still being fault tolerant.
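A minimal sketch of a transactional send with System.Messaging (the queue path is illustrative); the transactional queue plus a recoverable message is what lets queued events survive a crash or power loss:

```csharp
using System.Messaging;

class EventReporter
{
    private const string QueuePath = @".\private$\events"; // illustrative path

    public void Report(string eventBody)
    {
        // Normally done once at install time rather than per send
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath, transactional: true);

        using (var queue = new MessageQueue(QueuePath))
        using (var tx = new MessageQueueTransaction())
        {
            tx.Begin();
            var message = new Message(eventBody)
            {
                Recoverable = true // persist to disk, not just memory
            };
            queue.Send(message, tx);
            tx.Commit();
        }
    }
}
```

A separate reader can then receive from the queue inside its own transaction and forward events to the database, so a failed forward rolls the message back onto the queue.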
MSMQ. I think you could also take a look at the notion of a Job object.
I agree with the others that it's better to use an out-of-the-box system like MSMQ, with a set of messaging patterns in hand.
Anyway, if you have to do it yourself, you could use an in-memory database instead of serializing the data yourself; I believe it should be fast enough.

Quartz.NET fail prevention/detection methods

I have nearly completed a Quartz.NET-based Windows Service (using ADO.NET, not RAM jobs). The service copies/moves files to various paths depending upon a schedule. I have some concerns, however. It is very important that this service has some sort of detection method/system that will detect when the program has failed for whatever reason, whether it's files failing to be copied or the whole scheduler crashing. Just wondering what you guys think is the best way to do this? I have a couple of vague ideas, but I'm looking to hear some more input.
Here are the methods that we use:
We monitor the windows service itself using the IT monitoring system. We use one of those commercial products that monitors servers, services, databases, etc, but there are open source projects that can do this for you if you don't already have one in place.
We log fatal exceptions to a database table and have a separate service monitoring that table for exceptions.
We also use an ADO.NET store, so we also monitor the Quartz.NET tables for things like stuck triggers.
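As an illustration of the stuck-trigger check, a query along these lines can flag triggers that should already have fired; the table and column names follow the default Quartz.NET QRTZ_ schema, and the tolerance and helper name are hypothetical:

```csharp
using System;
using System.Data.SqlClient;

class StuckTriggerCheck
{
    // Counts triggers in a waiting state whose next fire time is
    // further in the past than the given tolerance.
    public static int CountStuckTriggers(string connectionString, TimeSpan tolerance)
    {
        const string sql = @"
            SELECT COUNT(*) FROM QRTZ_TRIGGERS
            WHERE TRIGGER_STATE IN ('WAITING', 'BLOCKED')
              AND NEXT_FIRE_TIME < @cutoff";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            // NEXT_FIRE_TIME is stored as ticks in the default schema
            cmd.Parameters.AddWithValue("@cutoff",
                DateTime.UtcNow.Subtract(tolerance).Ticks);
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}
```

A monitoring job or external check can run this periodically and raise an alert when the count is non-zero.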
With things like this you can definitely go down the over-engineering path. Just keep in mind the cost/benefit of adding each of these options, and then decide how much work you want to put into monitoring vs. the cost of an outage.

Logging to files or to event viewer?

I was wondering what is the 'correct' way to log information messages; to files, or to a special log in the event viewer?
I like logging to files since I can use a rolling flat file listener and see a fresh new log each day; plus, in the event viewer I can only see one message at a time, whereas in a file I can scan through the day much more easily. My colleague argues that files just take up space, and he likes having his warnings, errors, and information messages all in one place. What do you think? Is there a preferred way? If so, why?
Also, are there any concurrency issues with either method? I have read that EntLib is thread-safe and wraps calls in Monitor.Enter behind the scenes if the listener is not thread-safe, but I want to make sure (we're just using Logger.Write). We are using EntLib 3.1.
Thank you in advance.
Here's the rule of thumb that I use when logging messages.
EventLog (if you have access of course)
- We always log Unhandled Exceptions
- In most cases we log Errors or Fatals
- In some cases we log Warnings
- In some very rare cases we log Information
- We will never log useless general messages like: "I'm here, blah, blah, blah"
Log File
- General rule: we log everything, but we can choose the level or filter to use to turn down the volume of messages being logged
The EventLog is always a good option because it's bound to WMI. This way products like OpenView and the like can monitor and alert ops if something goes haywire. However, keep the messages to a minimum because it's slow, it's size-limited on a per-message basis, and it has an entry limit; you can fill up the EventLog quite quickly, and your application then has to handle the dreaded "EventLog is full" exception :)
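Writing to the EventLog, including guarding against the "EventLog is full" case, can be sketched like this; the source name is illustrative, and creating a source requires admin rights:

```csharp
using System;
using System.Diagnostics;

class EventLogWriter
{
    private const string Source = "MyApp"; // illustrative source name

    public static void WriteError(string message)
    {
        try
        {
            // Creating a source needs admin rights; normally done at install time
            if (!EventLog.SourceExists(Source))
                EventLog.CreateEventSource(Source, "Application");

            EventLog.WriteEntry(Source, message, EventLogEntryType.Error);
        }
        catch (Exception)
        {
            // Covers cases like a full event log: fall back to a
            // file log (or swallow) rather than crash the app
        }
    }
}
```

The key point is that the event-log write is itself wrapped, so a logging failure never takes the application down with it.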
Hope this helps...
There is no 'correct' way. It depends on your requirements.
You 'like' looking at flat files but how many (thousands) of lines can you really read every day?
What you seem to need is a plan (policy) and that ought to involve some tooling. Ask yourself how quickly will you notice an anomaly in the logs? And the absence of something normal?
The eventlog is a bit more work/overhead but it can be easily monitored remotely (multiples servers) by some tool. If you are using (only) manual inspection, don't bother.
In enterprise applications there are different types of logs, such as:
Activity logs - Technical logs which instrument a process and are useful in debugging
Audit logs - logs used for auditing purposes. Availability of such logs is a legal requirement in some cases.
What to store where:
Audit logs, or any logs with sensitive information, should go to the database, where they can be stored safely.
For activity logs my preference is files. But we should also have different log levels, such as Error, Info, Verbose, etc., which should be configurable. This makes it possible to save the space and time required for logging when it is not needed.
You should write to the event log only when you are not able to write to a file.
Consider asking your customer admins or technical support people where they want the logs to be placed.
As to being thread-safe, yes, EntLib is thread-safe.
I would recommend the event viewer, but in cases where you don't have admin rights or access to the event viewer, logging to normal files would be the better option.
I prefer logging to a database, that way I can profile my logs and generate statistics and trends on errors occurring and fix the most frequent ones.
For external customers I use a web service, called asynchronously, to report the error. (I swallow any exceptions in it so that logging errors won't affect the client; not that I've had any, using log4net and L4NDash.)