Error logging in a Windows application - C#

I have a Windows application and I want to log errors using Log4Net.
The issue I'm facing is where to log these errors. If I log to a local folder, the logs end up on the client's machine, which isn't conveniently accessible to us.
So, we thought of 2 options:
Log errors to a shared (network) location
Log errors to the database
The problem with logging errors to the database is that if there is an issue connecting to it, logging obviously fails, so we decided to log to a shared location.
Now, someone informed me that it is not a good practice to log to a shared location.
What can you do to ensure logging when it's conceivable that every place you can log might be inaccessible or down?

What shared location?
Generally you are always at risk of not getting the log, because the client machine can lose network connectivity. In that case, logging to a shared location, a database, a web service, or anything else remote is not going to help you.
Probably the lowest-friction professional solution would be to use a service like https://raygun.io/. It's not the only one; there are other similar services.
Failing that, logging to a database is usually quite adequate. In practice, if you can't log to the database it's because of one of the following:
Connectivity problems (either the client machine or the DB server lost its network connection, or the server is offline)
A bug in the logging code
Again, in practice, the probability of the latter is quite small given adequate testing, since logging code does not tend to be all that complicated. The former is much more probable, so you need to handle that scenario separately. Usually it's a good idea to log locally (local file / event log) if the database logging fails; see the sketch below. It is difficult to retrieve this data from the customer's machine, but in the rare case you need to troubleshoot a connectivity issue it's a life saver.
Logging to a "shared location" as in "windows share" is not common and in my opinion provide no advantage over logging to a database. Both database and shared location can be down. Client computer might not have connection. In all these scenarios both options behave the same.
In certain scenarios, it makes sense to upload the saved locally logfile to the database when network connectivity is restored, but often it's an overkill.
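For illustration, here is a minimal C# sketch of the local-fallback idea using log4net's appender API: it programmatically attaches a local RollingFileAppender as a safety net alongside whatever appenders (e.g. a database appender) your configuration already sets up. The file path and layout pattern are assumptions, not prescriptions:

    using log4net;
    using log4net.Appender;
    using log4net.Layout;
    using log4net.Repository.Hierarchy;

    static class LocalFallbackLogging
    {
        // Call once at startup, after the normal log4net configuration has run.
        public static void AddLocalFileSafetyNet()
        {
            var layout = new PatternLayout("%date [%thread] %-5level %logger - %message%newline");
            layout.ActivateOptions();

            var fileAppender = new RollingFileAppender
            {
                File = "logs/local-fallback.log", // assumption: the app directory is writable
                AppendToFile = true,
                RollingStyle = RollingFileAppender.RollingMode.Date,
                Layout = layout
            };
            fileAppender.ActivateOptions();

            // Attach to the root logger so every message also gets a local copy.
            var hierarchy = (Hierarchy)LogManager.GetRepository();
            hierarchy.Root.AddAppender(fileAppender);
            hierarchy.Configured = true;
        }
    }

This duplicates rather than replaces the database appender, which is usually what you want here: the local file is the copy you reach for when connectivity itself was the problem.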

What you're trying to accomplish is something that we call 'best effort' logging. It's a relatively language-agnostic practice that resembles:
try:
    if [database]
        log [database]
        done            // exit the logging method; the database worked
    endif
    if [rsyslog]        // if we got here, [database] didn't work
        log [rsyslog]
        done            // exit the logging method; rsyslog worked
    endif
    if [redis]          // if we got here, both [database] *and* [rsyslog] didn't work
        log [redis]
        done            // exit the logging method; redis worked
    endif
finally:                // nothing worked
    panic               // write anywhere it can; just open a file in the app directory if it must
That's up to you to implement around whatever logging library you wish to use, though some might conceivably have that built in. It only begins to look fragmented if things go badly, however:
Most stuff will end up in the database
Database exceptions will go to rsyslog (or any logging server)
Using a third tier is just extra paranoia; watch redis for messages about rsyslog being existentially challenged (pub/sub might be ideal there)
... or do it whatever way makes sense for you. If your ideal log location is the database, then just log to the database, while ensuring that you continue to make a 'best effort' if the database can't be reached. If all else fails, write to a file - you won't be doing it that often (or, ideally, ever need to fish bits out of it) - so the location becomes much less of an issue.
The end result is, all you have to worry about doing is calling your logging method, since you know that it's going to make the best effort possible to log the data in one of several defined manners. If it gets to the point that it can't even open a file, well - you've probably got lots of other interesting logs to look into as well :)
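To make that concrete, here is a rough C# sketch of such a best-effort chain. The two sink methods are hypothetical stubs; substitute your actual database, syslog, or other writers:

    using System;
    using System.Collections.Generic;
    using System.IO;

    static class BestEffortLogger
    {
        // Sinks in order of preference; each either succeeds or throws.
        static readonly List<Action<string>> Sinks = new List<Action<string>>
        {
            WriteToDatabase,   // hypothetical: INSERT into a log table
            WriteToSyslog      // hypothetical: forward to rsyslog / a log server
        };

        public static void Log(string message)
        {
            foreach (var sink in Sinks)
            {
                try
                {
                    sink(message);
                    return;        // done: this sink worked
                }
                catch
                {
                    // swallow and fall through to the next sink
                }
            }

            // Panic: nothing worked, so write to a file in the app directory.
            try
            {
                File.AppendAllText("panic.log",
                    DateTime.UtcNow.ToString("o") + " " + message + Environment.NewLine);
            }
            catch
            {
                // truly nowhere left to log
            }
        }

        static void WriteToDatabase(string message) { throw new NotImplementedException(); }
        static void WriteToSyslog(string message)   { throw new NotImplementedException(); }
    }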

Related

Should I use a Message Queue Broker for logging in a centralized schema?

I want to make a centralized log for our infrastructure because of the ease of going to one place to find everything. We currently have about 15~20 different systems we would like to log from, and I was thinking of using NLog to log to a web service.
So far so good, but then I read a thread which points out that:
Availability of the central repository is a little more complicated than just 'if you can't connect, don't log it' because usually the most interesting events occur when there are problems, not when things go smooth
So the author (Remus Rusanu) said that using MSMQ is a good way to go if you are in a Microsoft environment (which I am). I think this makes some sense, so I wanted to see other opinions, and then I found another article where the general idea is that MSMQ is too much for "just logging"; this time the reason given for that conclusion was performance.
So, in your experience, should I worry about high availability of the centralized logger, or should I just log to local files when the service is not available?
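For context, a minimal sketch of what the MSMQ approach looks like in C# with System.Messaging, including the local-file fallback you mention for when the queue itself is unreachable (the queue path and file name are assumptions):

    using System;
    using System.IO;
    using System.Messaging;

    static class QueueLogger
    {
        const string QueuePath = @".\private$\applog"; // assumed pre-created private queue

        public static void Log(string message)
        {
            try
            {
                using (var queue = new MessageQueue(QueuePath))
                {
                    // A separate consumer drains the queue into the central store.
                    queue.Send(message, "log");
                }
            }
            catch (MessageQueueException)
            {
                // Queue unreachable: fall back to a local file so nothing is lost.
                File.AppendAllText("offline.log",
                    DateTime.UtcNow.ToString("o") + " " + message + Environment.NewLine);
            }
        }
    }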

How to prevent NHibernate long-running process from locking up web site?

I have an NHibernate MVC application that is using ReadCommitted Isolation.
On the site, there is a certain process that the user can initiate which, depending on the input, may take several minutes. This is because the session is per-request and stays open that entire time.
But while that runs, no other user can access the site (they can try, but their request won't go through until the long-running operation has finished).
What's more, I also have a need to have a console app that also performs this long running function while connecting to the same database. It is causing the same issue.
I'm not sure what part of my setup is wrong, any feedback would be appreciated.
NHibernate is set up with fluent configuration and StructureMap.
Isolation level is set as ReadCommitted.
The session factory lifecycle is HybridLifeCycle (which on the web should be Session per request, but on the win console app would be ThreadLocal)
It sounds like your requests are waiting on database locks. Your options are really:
Break the long-running process into a series of smaller transactions.
Use the ReadUncommitted isolation level most of the time (this is appropriate in a lot of use cases); a sketch follows after this list.
Judicious use of the Snapshot isolation level (assuming you're using MS SQL Server 2005 or later).
(N.B. I'm assuming the long-running function does a lot of reads/writes and the requests being blocked are primarily doing reads.)
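To illustrate the second option, here is a minimal sketch of choosing the isolation level per transaction in NHibernate; the Order entity is hypothetical, and the session is assumed to come from your session-per-request setup:

    using System.Data;
    using NHibernate;

    class ReadMostlyWork
    {
        public void Run(ISession session)
        {
            // BeginTransaction accepts a System.Data.IsolationLevel, so read-mostly
            // requests can avoid waiting on locks held by the long-running writer.
            using (ITransaction tx = session.BeginTransaction(IsolationLevel.ReadUncommitted))
            {
                var orders = session.CreateQuery("from Order").List();
                tx.Commit();
            }
        }
    }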
As has been suggested, breaking your process down into multiple smaller transactions will probably be the solution.
I would suggest looking at something like Rhino Service Bus or NServiceBus (my preference is Rhino Service Bus; I personally find it much simpler to work with). A service bus lets you break the functionality down into small chunks while maintaining the transactional nature: you send a message to initiate a piece of work, and that piece of work is enlisted in a distributed transaction along with the receipt of the message, so if something goes wrong the message does not just disappear and leave your system in a potentially inconsistent state.
Depending on what you need to do, you could send an initial message to start the processing, and then after each step send a new message to initiate the next step. This can really help to break the transactions down into much smaller pieces of work (and simplify the code). The two service buses I mentioned (there is also Mass Transit) have things like retries and error handling built in, so that if something goes wrong the message ends up in an error queue; you can investigate what went wrong, hopefully fix it, and reprocess the message, thus ensuring your system remains consistent.
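Schematically, the step-chaining idea looks something like the following. The bus and handler interfaces here are hand-rolled stand-ins, not the real Rhino Service Bus or NServiceBus API, and the message types are invented for illustration:

    // Illustrative shapes only; a real bus supplies its own interfaces.
    public interface IBus { void Send(object message); }
    public interface IHandle<T> { void Handle(T message, IBus bus); }

    // Each step is a small message carrying just enough state to continue.
    public class StartImport   { public int BatchId; }
    public class ValidateBatch { public int BatchId; }
    public class PersistBatch  { public int BatchId; }

    public class StartImportHandler : IHandle<StartImport>
    {
        public void Handle(StartImport msg, IBus bus)
        {
            // ... do one small, fast unit of work in its own transaction ...
            bus.Send(new ValidateBatch { BatchId = msg.BatchId }); // kick off the next step
        }
    }

    public class ValidateBatchHandler : IHandle<ValidateBatch>
    {
        public void Handle(ValidateBatch msg, IBus bus)
        {
            // ... validate, then hand off to persistence ...
            bus.Send(new PersistBatch { BatchId = msg.BatchId });
        }
    }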
Of course whether this is necessary depends on the requirements of your system :)
Another, more complex solution would be:
You build a background robot application which runs on one of the machines.
This background worker robot can receive "worker jobs" (the ones initiated by the user).
The robot then processes the jobs step by step in the background (a minimal sketch follows below).
Pitfalls are:
- you have to program this robot to be very stable
- you need to watch the robot somehow
Sure, this involves more work; on the flip side, you gain the option to integrate more job types, enabling your system to process different things in the background.
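A minimal sketch of such a robot, assuming a hypothetical job store (in practice, say, a Jobs table read over ADO.NET); monitoring and error handling are deliberately omitted:

    using System;
    using System.Threading;

    // Hypothetical job store; in practice this would read a Jobs table.
    interface IJobStore
    {
        Job TakeNextPending();   // returns null when nothing is queued
        void MarkDone(Job job);
    }

    class Job { public int Id; public string Payload; }

    class RobotWorker
    {
        readonly IJobStore _store;
        public RobotWorker(IJobStore store) { _store = store; }

        public void Run(CancellationToken token)
        {
            while (!token.IsCancellationRequested)
            {
                Job job = _store.TakeNextPending();
                if (job == null)
                {
                    Thread.Sleep(TimeSpan.FromSeconds(5)); // idle poll interval
                    continue;
                }
                Process(job);          // one small transaction per step
                _store.MarkDone(job);
            }
        }

        void Process(Job job) { /* the long-running work, broken into steps */ }
    }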
I think the design of your application / SQL statements has a problem. Unless you are Facebook, I don't think any process should take this long; it is better to review your design and check where the bottleneck is, instead of trying to keep this long-running process going.
Also, sometimes an ORM is not good for every scenario; did you try using a stored procedure?

Using msmq for asynchronous logging

I need to do logging in our application and would like to keep the time consumed by logging as low as possible. I am thinking of using MSMQ, so that the application logs to MSMQ and I can then move the messages from MSMQ to the database/files asynchronously.
Is this idea good in terms of performance, or is logging to flat files synchronously using log4net better?
Also, I am thinking of coding a logging abstraction layer so that I can plug in other logging tools later without affecting other code.
Please advise.
Thanks,
sveerap
I would advise against this. This is a needlessly complex solution for a problem that doesn't really exist. I've used log4net in multiple projects and never saw any significant performance degradation because of it.
It's a better idea to take good care of selecting the right logging level for each log message (DEBUG, INFO, WARN, etc.). When you start your project, and maybe for a short time once you're in production, you log everything from DEBUG up. When you're confident everything works, you switch to INFO in the configuration. This should be enough to tackle any performance issues you may encounter with logging.
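For instance, with log4net you can guard expensive DEBUG messages so that configuration alone controls the volume (a minimal sketch; the class name and messages are arbitrary):

    using log4net;

    class OrderService
    {
        static readonly ILog Log = LogManager.GetLogger(typeof(OrderService));

        public void Process(int orderId)
        {
            // The guard avoids building the message string when DEBUG is off.
            if (Log.IsDebugEnabled)
                Log.Debug("Processing order " + orderId);

            Log.Info("Order processed"); // survives the switch to INFO
        }
    }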
Concerning your abstraction layer, I wouldn't do that either. Log4net itself abstracts all the details of the logging via its appenders. And if you really want an abstraction, you may want to take a look at Common.Logging.
For what it's worth, there are scenarios where this isn't overkill. But, for most applications I would say that it is.
I work in an environment that is comprised of several z/OS mainframes and a variety of *nix midranges. The systems all write logging messages to a shared queue, which a separate process drains. Organisationally, this was found to provide better throughput and to ensure the consistency of the log.
With that said, I can see the advantage of using this approach for your applications. For example, if you develop a lot of internal applications, a common log (for example, in the database: have a process which reads the queue and writes the entries to the database) would allow you to aggregate all of your messages.
However, you will probably find that log4net or another .NET logging package will perfectly suit your needs.
There are advantages to both, but as everyone else has said - using MSMQ for logging is (probably) like going after a fly with a bazooka.
Honestly, MSMQ seems overkill for logging messages. Unless you absolutely need reliable delivery of the log messages, log4net seems a perfectly fit solution. Also keep in mind that creating a message in MSMQ might take longer than actually writing to a buffered file.
You may also want to have a look at the System.Diagnostics.Trace object.
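A minimal example of that built-in option, routing trace output to a file listener (the file name is an assumption; listeners can also be configured in app.config):

    using System.Diagnostics;

    class TraceDemo
    {
        static void Main()
        {
            Trace.Listeners.Add(new TextWriterTraceListener("trace.log"));
            Trace.AutoFlush = true;

            Trace.TraceInformation("application started");
            Trace.TraceError("something went wrong");
        }
    }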

Filling Windows XP Security Event Log

I am in need of filling the Windows Security Event Log to a near-full state. Since write access to this log is not possible, could anybody please advise an action that could be performed programmatically to add an entry to this log? It does not need to be of any significance, as long as it adds an entry (one with the least overhead would be preferred, as it will need to be executed thousands of times).
This is needed purely for testing purposes on a testing rig; any dirty solution will do. The only requirement is that it's .NET 2.0 (C#).
You can enable all the security auditing categories in local security policy (secpol.msc | Local Policies | Audit Policy). Object access tends to give plenty of events. Enabling file access auditing and then setting an audit for Everyone on some frequently accessed files and folders will also generate lots of events.
And that's under normal usage, which includes any programmatic access to those audited resources (it's all programmatic in the end, just someone else's program).
1. Enable logon auditing as Richard mentioned above. Whether you get success or failure audits depends on how you handle step 2.
2. Use LogonUser to impersonate a local user on the system - or fail to impersonate that local user. There are plenty of samples via Google for viable C# implementations; a sketch follows below.
3. Call it in a tight loop, repeatedly.
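A minimal .NET 2.0 C# sketch of that approach, using a deliberately wrong password so every call produces a logon-failure audit entry (the user name and iteration count are arbitrary):

    using System;
    using System.Runtime.InteropServices;

    class SecurityLogFiller
    {
        [DllImport("advapi32.dll", SetLastError = true)]
        static extern bool LogonUser(
            string lpszUsername, string lpszDomain, string lpszPassword,
            int dwLogonType, int dwLogonProvider, out IntPtr phToken);

        const int LOGON32_LOGON_INTERACTIVE = 2;
        const int LOGON32_PROVIDER_DEFAULT = 0;

        static void Main()
        {
            // Each failed logon writes a failure audit to the Security log,
            // assuming logon auditing is enabled in local security policy.
            for (int i = 0; i < 10000; i++)
            {
                IntPtr token;
                LogonUser("someLocalUser", ".", "definitely-wrong-password",
                          LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token);
            }
        }
    }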
Another approach you can take involves enabling object access auditing and performing a large number of file or registry I/O operations. This will also fill the log completely in an extremely short period of time.

Logging to files or to event viewer?

I was wondering what the 'correct' way is to log information messages: to files, or to a special log in the event viewer?
I like logging to files, since I can use a rolling flat-file listener and see a fresh new log each day; plus, in the event viewer I can only see one message at a time, whereas in a file I can scan through the day much more easily. My colleague argues that files just take up space and he likes having his warnings, errors, and information messages all in one place. What do you think? Is there a preferred way? If so, why?
Also, are there any concurrency issues with either method? I have read that EntLib is thread-safe and wraps calls in a Monitor.Enter behind the scenes if the listener is not thread-safe, but I want to make sure (we're just using Logger.Write). We are using EntLib 3.1.
Thank you in advance.
Here's the rule of thumb that I use when logging messages.
EventLog (if you have access of course)
- We always log Unhandled Exceptions
- In most cases we log Errors or Fatals
- In some cases we log Warnings
- In some very rare cases we log Information
- We will never log useless general messages like: "I'm here, blah, blah, blah"
Log File
- General rule: we log everything but can choose the logging level or filter to turn down the volume of messages being logged
The EventLog is always a good option because it's bound to WMI. This way, products like HP OpenView and the like can monitor and alert ops if something goes haywire. However, keep the messages to a minimum: it's slow, it's size-limited on a per-message basis, and it has an entry limit, so you can easily fill up the EventLog quite quickly and your application then has to handle the dreaded "EventLog is Full" exception :)
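For reference, writing to the EventLog from C# looks like this; the source name is arbitrary, and creating a source needs admin rights, so it is usually done at install time:

    using System.Diagnostics;

    class EventLogExample
    {
        static void Main()
        {
            const string source = "MyApp"; // arbitrary source name

            // Creating a source requires admin rights; typically the installer does it.
            if (!EventLog.SourceExists(source))
                EventLog.CreateEventSource(source, "Application");

            EventLog.WriteEntry(source, "Unhandled exception: ...", EventLogEntryType.Error);
        }
    }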
Hope this helps...
There is no 'correct' way. It depends on your requirements.
You 'like' looking at flat files, but how many (thousands of) lines can you really read every day?
What you seem to need is a plan (policy), and that ought to involve some tooling. Ask yourself: how quickly will you notice an anomaly in the logs? And the absence of something normal?
The event log is a bit more work/overhead, but it can easily be monitored remotely (across multiple servers) by some tool. If you rely only on manual inspection, don't bother.
In enterprise applications there are different types of logs, such as:
Activity logs - technical logs which instrument a process and are useful in debugging
Audit logs - logs used for auditing purposes; availability of such logs is a legal requirement in some cases
What to store where:
As far as audit logs, or any logs with sensitive information, are concerned, they should go to a database where they can be stored safely.
For activity logs my preference is files. But we should also have different log levels, such as Error, Info, Verbose, etc., which should be configurable. This makes it possible to save the space and time required for logging when it is not needed.
You should write to the event log only when you are not able to write to a file.
Consider asking your customer admins or technical support people where they want the logs to be placed.
As to being thread-safe, yes, EntLib is thread-safe.
I would recommend the event viewer, but in cases where you don't have admin rights or access to the event viewer, logging to normal files would be the better option.
I prefer logging to a database, that way I can profile my logs and generate statistics and trends on errors occurring and fix the most frequent ones.
For external customers I call a web service asynchronously to report the error. (I swallow any exceptions in it so that logging errors wouldn't affect the client - not that I've had any, using log4net and L4NDash.)
