I have implemented log4net logging in my C# project.
Right now I'm using the EventLogAppender to log all the errors, but I want to know if the FileAppender is a better approach. I have concerns about performance when saving to a file instead of logging to the system event log.
What are the benefits of using FileAppender vs EventLogAppender?
Performance-wise, both are fast, but I suspect the file-based appender will be faster. If you are writing so many logs that this is a concern, then your program sounds pretty "chatty", so I would go with the FileAppender: system logs are nice when entries are concise and occasional, but they quickly get tedious if they are long and/or frequent. File-based logs are also generally easier to archive, if that's a concern. Event logs, on the other hand, are nice if you are already monitoring the event logs or if you want to put everything in a "standard" place, so the user always knows where to look.
Note that you don't have to choose just one or the other: you can do your short, occasional status updates in the event log and the details in the file log. That's the approach I usually take.
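For what it's worth, here is a minimal sketch of that split with log4net configured programmatically (assuming a recent log4net version; the file path, event source name and level thresholds are placeholders you would adapt):

```csharp
using log4net.Appender;
using log4net.Config;
using log4net.Core;
using log4net.Layout;

static class LoggingSetup
{
    public static void Configure()
    {
        var layout = new PatternLayout("%date [%thread] %-5level %logger - %message%newline");
        layout.ActivateOptions();

        // Detailed, chatty output goes to a rolling file.
        var fileAppender = new RollingFileAppender
        {
            File = @"logs\app.log",                      // placeholder path
            AppendToFile = true,
            RollingStyle = RollingFileAppender.RollingMode.Date,
            Layout = layout,
            Threshold = Level.Debug
        };
        fileAppender.ActivateOptions();

        // Only warnings and above go to the Windows event log.
        // Note: the event source usually has to be created at install time (admin rights).
        var eventLogAppender = new EventLogAppender
        {
            ApplicationName = "MyApp",                   // placeholder event source
            Layout = layout,
            Threshold = Level.Warn
        };
        eventLogAppender.ActivateOptions();

        BasicConfigurator.Configure(fileAppender, eventLogAppender);
    }
}
```

The same split can of course be expressed in the XML configuration instead; the point is simply that the two appenders can carry different thresholds.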
I want to make a centralized log for our infrastructure because of the ease of going to one place to find everything. We currently have about 15-20 different systems we would like to log, and I was thinking of using NLog to log to a web service.
So far so good, but then I read a thread which argues that:
Availability of the central repository is a little more complicated than just 'if you can't connect, don't log it' because usually the most interesting events occur when there are problems, not when things go smooth
So the author (Remus Rusanu) said that using MSMQ is a good way to go if you are in a Microsoft environment (which I am). I think this makes sense, so I wanted to see other opinions, and I found another article where the general idea is that MSMQ is too much for "just logging", though this time the reasoning was about performance.
So, in your experience, should I worry about high availability of the centralized logger, or should I just log to local files when the service is not available?
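To be concrete, what I had in mind for the "log locally when the service is down" option is something like this rough sketch (IRemoteLogService and the file path are made-up placeholders here, not NLog or MSMQ APIs):

```csharp
using System;
using System.IO;

// Hypothetical client for the central logging web service.
public interface IRemoteLogService
{
    void Send(string message);
}

public class FallbackLogger
{
    private readonly IRemoteLogService _remote;
    private readonly string _localPath;

    public FallbackLogger(IRemoteLogService remote, string localPath)
    {
        _remote = remote;
        _localPath = localPath;
    }

    public void Log(string message)
    {
        try
        {
            _remote.Send(message);    // normal path: central log service
        }
        catch (Exception)
        {
            // Central service unreachable: keep the event locally so it is not lost.
            File.AppendAllText(_localPath,
                DateTime.UtcNow.ToString("o") + " " + message + Environment.NewLine);
        }
    }
}
```

I believe NLog also ships wrapper targets (e.g. a fallback group) that do this kind of failover declaratively, which may be worth checking before rolling my own.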
I have a large web application in ASP .NET (not that the technology matters here) that does not currently have a way to be locked down for maintenance without current users losing their work. Since I have not implemented something like this before, I would like to hear about some of the standard precautions and steps developers take for such an operation.
Here are some of the questions that I can think of:
Should each page redirect to a "Site down for maintenance" page or is there a more central way to prevent interaction?
How do you coordinate a scheduled maintenance window so that user operations are locked down before the site itself goes down, thus preventing loss of unsaved work?
The application is data-driven and implements transaction scopes at the business layer. It does not use load balancing or replication. I may be wrong, but it does not 'feel right' to have the BLL handle this. Any suggestions or links to articles would be appreciated.
One way to make a maintenance page is to use the app_offline.htm feature of IIS. Using this feature you will be able to show the same HTML page to all your users, notifying them about the maintenance.
There is a nice post about it here on Stack Overflow: ASP.NET 2.0 - How to use app_offline.htm.
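As a rough illustration, toggling maintenance mode can be as simple as copying the file into the site root and deleting it afterwards; the paths below are placeholders for your own deployment layout:

```csharp
using System.IO;

static class MaintenanceMode
{
    // Placeholder paths; adjust to your deployment layout.
    private const string SiteRoot = @"C:\inetpub\wwwroot\MyApp";
    private const string Template = @"C:\deploy\app_offline.htm";

    // Dropping app_offline.htm in the site root makes ASP.NET serve that page
    // for every request and unload the application.
    public static void Enter() =>
        File.Copy(Template, Path.Combine(SiteRoot, "app_offline.htm"), overwrite: true);

    // Removing the file brings the application back online.
    public static void Exit() =>
        File.Delete(Path.Combine(SiteRoot, "app_offline.htm"));
}
```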
Another thing you could do is notify your users that there is scheduled maintenance coming, so that they are aware and stop using the application.
It all depends on how long you need to upgrade your application. If the upgrade is just uploading the new files and takes no more than a minute or two, it's most likely that your users won't even notice it.
A non-answer answer that may be helpful: design the application so that it can be upgraded on the fly transparently to its users. Then you never have a maintenance window that users really need to worry about. There is no need to lock down the application because everything keeps working. If transactions get dropped, that's a bug in the application because there is an explicit requirement that the application can be upgraded with transactions in progress, so it's been coded to support that and there are tests that verify that functionality.
Consider as an example Netflix: does it have a locked down maintenance window? Not that the general public ever knows about. :-)
I need to do logging in our application and would like to keep the time consumed due to logging as little as possible. I am thinking of using MSMQ so that the application will log into MSMQ and then I can log the messages from MSMQ to the database/files asynchronously.
Is this a good idea in terms of performance, or is logging synchronously to flat files using log4net better?
Also, I am thinking of coding a logging abstraction layer so that I can plug in other logging tools later without affecting other code.
Please advise.
Thanks,
sveerap
I would advise against this. This is a needlessly complex solution for a problem that doesn't really exist. I've used log4net in multiple projects and never saw any significant performance degradation because of it.
It's a better idea to take good care in selecting the right logging level for each log message (DEBUG, INFO, WARN, etc.). When you start your project, and maybe for a short time once you're in production, you log everything from DEBUG and higher. When you're confident everything works, you switch to INFO in the configuration. This should be enough to tackle any performance issues you may encounter with logging.
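One small thing that helps when DEBUG logging is expensive is guarding the call so the message is only built when the level is actually enabled; a typical log4net pattern (OrderProcessor is just an example class):

```csharp
using log4net;

public class OrderProcessor
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(OrderProcessor));

    public void Process(int orderId, int lineCount)
    {
        // The IsDebugEnabled guard avoids building the message string at all
        // when DEBUG is switched off in configuration, so the cost of verbose
        // logging largely disappears once you move the level up to INFO.
        if (Log.IsDebugEnabled)
        {
            Log.Debug("Processing order " + orderId + " with " + lineCount + " lines");
        }

        Log.Info("Order " + orderId + " processed");
    }
}
```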
Concerning your abstraction layer, I wouldn't do this either. Log4net already abstracts the details of where the output goes via its appenders. And if you really want an abstraction, you may also want to take a look at Common.Logging.
For what it's worth, there are scenarios where this isn't overkill. But, for most applications I would say that it is.
I work in an environment that is comprised of several z/OS mainframes and a variety of *nix midranges. The systems all write logging messages to a shared queue which is processed. Organisationally, it was found to provide better throughput and to ensure the consistency of the log.
With that said, I can see the advantage of using this approach for your applications. For example, if you develop a lot of internal applications, having a common log (for example, in a database, with a process that reads the queue and writes the entries to the database) would allow you to aggregate all of your messages.
However, you will probably find that log4net or another .NET logging package will perfectly suit your needs.
There are advantages to both, but as everyone else has said - using MSMQ for logging is (probably) like going after a fly with a bazooka.
Honestly, MSMQ seems overkill for logging messages. Unless you absolutely need reliable delivery of the log messages, log4net seems to be a perfectly good fit. Keep in mind also that creating a message in MSMQ might take longer than actually writing to a buffered file.
You may also want to have a look at the System.Diagnostics.Trace class.
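If you go that route, a minimal sketch looks like this (the listener file name is a placeholder):

```csharp
using System.Diagnostics;

class TraceDemo
{
    static void Main()
    {
        // Route Trace output to a text file in addition to any configured listeners.
        Trace.Listeners.Add(new TextWriterTraceListener("trace.log"));
        Trace.AutoFlush = true;

        Trace.TraceInformation("Application started");
        Trace.TraceWarning("Low disk space");
        Trace.TraceError("Could not connect to database");
    }
}
```

Listeners can also be added in app.config instead of code, which keeps the routing configurable after deployment.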
I was wondering what is the 'correct' way to log information messages; to files, or to a special log in the event viewer?
I like logging to files since I can use a rolling flat file listener and see a fresh new log each day; plus, in the event viewer I can only see one message at a time, whereas in a file I can scan through the day much more easily. My colleague argues that files just take up space and he likes having his warnings, errors and information messages all in one place. What do you think? Is there a preferred way? If so, why?
Also, are there any concurrency issues with either of the methods? I have read that entlib is thread-safe and uses a Monitor.Enter behind the scenes if the listener is not thread-safe, but I want to make sure (we're just using Logger.Write). We are using entlib 3.1.
Thank you in advance.
Here's the rule of thumb that I use when logging messages.
EventLog (if you have access of course)
- We always log Unhandled Exceptions
- In most cases we log Errors or Fatals
- In some cases we log Warnings
- In some very rare cases we log Information
- We will never log useless general messages like: "I'm here, blah, blah, blah"
Log File
- As a general rule, we log everything, but we can choose the level or filter to use to turn down the volume of messages being logged
The EventLog is always a good option because it's bound to WMI. That way, products like OpenView and the like can monitor and alert ops if something goes haywire. However, keep the messages to a minimum, because it's slow, it's size-limited on a per-message basis, and it has an entry limit: you can easily fill up the EventLog quite quickly, and your application has to handle the dreaded "EventLog is Full" exception :)
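As an illustration of keeping that path defensive, something along these lines (the source name is a placeholder, and creating a source needs admin rights):

```csharp
using System;
using System.Diagnostics;

static class EventLogWriter
{
    private const string Source = "MyApp";   // placeholder event source

    public static void WriteError(string message)
    {
        try
        {
            // Creating a source requires admin rights, so in practice this is
            // done once at install time rather than at run time.
            if (!EventLog.SourceExists(Source))
                EventLog.CreateEventSource(Source, "Application");

            EventLog.WriteEntry(Source, message, EventLogEntryType.Error);
        }
        catch (Exception)
        {
            // Swallow (or fall back to a file): a full or inaccessible event log
            // should never take the application down.
        }
    }
}
```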
Hope this helps...
There is no 'correct' way. It depends on your requirements.
You 'like' looking at flat files but how many (thousands) of lines can you really read every day?
What you seem to need is a plan (policy) and that ought to involve some tooling. Ask yourself how quickly will you notice an anomaly in the logs? And the absence of something normal?
The eventlog is a bit more work/overhead but it can be easily monitored remotely (multiples servers) by some tool. If you are using (only) manual inspection, don't bother.
In enterprise applications there are different types of logs such as -
Activity logs - Technical logs which instrument a process and are useful in debugging
Audit logs - logs used for auditing purposes. Availability of such logs is a legal requirement in some cases.
What to store where:
Audit logs, or any other logs with sensitive information, should go to a database where they can be stored safely.
For activity logs my preference is files. But we should also have different log levels, such as Error, Info, Verbose, etc., which should be configurable. This makes it possible to save the space and time spent on logging when it is not needed.
You should write to event log only when you are not able to write to a file.
Consider asking your customer admins or technical support people where they want the logs to be placed.
As to being thread-safe, yes, EntLib is thread-safe.
I would recommend the Event Viewer, but in cases where you don't have admin rights or the required access to it, logging to normal files would be the better option.
I prefer logging to a database; that way I can profile my logs, generate statistics and trends on the errors occurring, and fix the most frequent ones.
For external customers I use a web service, called asynchronously, to report the error. (I swallow any exceptions in it so that logging errors won't affect the client; not that I've had any, using log4net and L4NDash.)
I have an application which can only have one instance running at a time; however, if a 2nd instance is launched, that needs to be logged to a common log file that the first instance could also be using.
I already have the check for how many instances are running. I was initially planning on simply logging to the event log, but the application can run in either user or system context, and exceptions are thrown when attempting to query the event log source as a user, so that idea is scrapped because the security logs are inaccessible to the user.
So I wanted to find out the safest way to have two separate instances of the same application write to a log file, ensuring both get an opportunity to write to it.
I would prefer not to use an additional framework if avoidable.
Any help appreciated.
A Mutex could be used for interprocess synchronization of a shared resource such as a log file. Here's a sample.
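A rough sketch of that idea, using a named (machine-wide) Mutex to guard the writes; the mutex name and log path are placeholders:

```csharp
using System;
using System.IO;
using System.Threading;

static class SharedLog
{
    // Naming the mutex makes it visible to every process on the machine.
    private static readonly Mutex LogMutex = new Mutex(false, @"Global\MyAppLogMutex");

    public static void Write(string message)
    {
        LogMutex.WaitOne();
        try
        {
            File.AppendAllText(@"C:\Logs\myapp.log",
                DateTime.Now + " [" + Environment.UserName + "] " + message + Environment.NewLine);
        }
        finally
        {
            LogMutex.ReleaseMutex();
        }
    }
}
```

The Global\ prefix matters if one instance runs as a service and another in a user session; depending on the accounts involved you may also need to create the mutex with an access rule that lets both processes open it.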
You could always write to the system event log. No locking or anything needed and the event viewer is more robust than some give it credit for.
In response to your comment, another user asked the question about write permissions for the event log here on SO. The answer linked to the msdn article that describes how to perform that.
See that question here.
You can dodge the problem if you prefer...
If this is a Windows app, you can send the first instance a message and then just quit. On receiving the message, the original instance can write to the log file without any issues.
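A rough sketch of that hand-off using a named pipe (the pipe name and message text are placeholders): the first instance listens, the second connects, sends its note, and exits.

```csharp
using System;
using System.IO;
using System.IO.Pipes;

static class InstanceHandoff
{
    private const string PipeName = "MyAppLogPipe";   // placeholder

    // Run by the first (surviving) instance, e.g. on a background thread.
    public static void Listen(string logPath)
    {
        using (var server = new NamedPipeServerStream(PipeName, PipeDirection.In))
        using (var reader = new StreamReader(server))
        {
            server.WaitForConnection();
            string line;
            while ((line = reader.ReadLine()) != null)
                File.AppendAllText(logPath, line + Environment.NewLine);
        }
    }

    // Run by the second instance just before it quits.
    public static void ReportAndQuit()
    {
        using (var client = new NamedPipeClientStream(".", PipeName, PipeDirection.Out))
        using (var writer = new StreamWriter(client))
        {
            client.Connect(1000);   // give up after one second
            writer.WriteLine(DateTime.Now + " second instance started and exited");
            writer.Flush();
        }
    }
}
```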
Why not use the syslog protocol? This would allow you to deliver the logs in a very standards-based and flexible manner. The protocol itself is quite simple, and there are plenty of examples on the net, e.g. here. If your app is destined for enterprise use, having a standard way of logging could be a big plus. (And you do not need to maintain the files either; that becomes the job of specialized software that does just that.)
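A bare-bones example of emitting a roughly RFC 3164-style syslog message over UDP (host, facility and tag are placeholders):

```csharp
using System;
using System.Net.Sockets;
using System.Text;

static class SyslogSender
{
    public static void Send(string host, string message)
    {
        const int facility = 16;            // local0
        const int severity = 6;             // informational
        int priority = facility * 8 + severity;

        // Approximate RFC 3164 format: <PRI>TIMESTAMP HOSTNAME TAG: MESSAGE
        string payload = string.Format("<{0}>{1} {2} myapp: {3}",
            priority,
            DateTime.Now.ToString("MMM dd HH:mm:ss"),
            Environment.MachineName,
            message);

        byte[] bytes = Encoding.ASCII.GetBytes(payload);
        using (var udp = new UdpClient())
        {
            udp.Send(bytes, bytes.Length, host, 514);   // 514 is the standard syslog port
        }
    }
}
```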
One way to hack it would be to memory-map the log file. That way, both instances of the application are sharing the same virtual memory image of the file. Then there are a number of ways of implementing a mutex inside the file.