Cleanest way to handle all exceptions in C#

I have been investigating the best way to handle all the exceptions of an application without messing much with the code. The main objective here is to send information about the exceptions to an external platform such as Application Insights.
So far I've found the following methods:
Castle Interceptor:
This is the best approach so far; the catch is that, for methods to be intercepted, they must either be virtual or the class must implement an interface. Since I'm working on a really large application, these changes are not desirable.
Events:
Using AppDomain.UnhandledException is also worth considering, but since I have several AppDomains, that would require a lot of changes and tampering with classes solely for the sake of exceptions, which is not optimal since classes should not be modified just because of exception handling.
Besides the number of AppDomains, I also have several threads running whose exceptions are not caught by this kind of handler.
PostSharp:
PostSharp works similarly to Castle, and the problem here, if I understood correctly, is that I would have to add attributes/decorators to all the methods I want intercepted, which is also not a very good approach.
If anyone has any suggestions on the best approach here, I would really appreciate it.

There is a fine article located at https://dncmagazine.blob.core.windows.net/edition30/DNCMag-Issue30.pdf that discusses error handling in large projects. Probably the least intrusive approach would be to use the global exception handler. I would also suggest looking at using a library such as log4net as this can record exception details using multiple stores (local file, SQL, .....) and can be reconfigured from config files, thus avoiding code changes, recompiling and application distribution/installation.
For those not familiar with DNCMag - it is a FREE magazine for coders with many excellent articles and can be viewed at http://www.dotnetcurry.com/magazine/
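To illustrate the global exception handler approach, here is a minimal sketch, assuming the Microsoft.ApplicationInsights package is referenced; GlobalExceptionReporting is a hypothetical helper name and the hook-up would need to be repeated once per AppDomain.

    // A minimal sketch of the global exception handler approach, assuming the
    // Microsoft.ApplicationInsights package; GlobalExceptionReporting is a
    // hypothetical helper and must be hooked up once per AppDomain.
    using System;
    using System.Threading.Tasks;
    using Microsoft.ApplicationInsights;

    static class GlobalExceptionReporting
    {
        private static readonly TelemetryClient Telemetry = new TelemetryClient();

        public static void HookUp()
        {
            // Fires for unhandled exceptions on any thread in this AppDomain.
            AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
            {
                if (e.ExceptionObject is Exception ex)
                    Telemetry.TrackException(ex);
                Telemetry.Flush();          // the process may be about to terminate
            };

            // Fires for faulted tasks whose exceptions are never observed.
            TaskScheduler.UnobservedTaskException += (sender, e) =>
            {
                Telemetry.TrackException(e.Exception);
                e.SetObserved();            // prevent escalation
            };
        }
    }

Note that this only reports exceptions; an unhandled exception on a background thread will still bring the process down, so the handlers mainly buy you the chance to flush telemetry before shutdown.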

Related

The best way to explore/investigate/understand the class hierarchy and workings of a new project

Imagine this situation: you inherit some legacy code or pick up a new framework. You need to investigate and understand how to work with this code as soon as possible, and there is no chance to ask the previous developer for help. What are the best practices/methods/steps/tools (preferably from the .NET Framework tool stack) for investigating a code base that is new to you as efficiently as possible?
If it is a framework and there is not much documentation or unit tests, what tools do you usually use to explore the class hierarchy, methods and events? Is it the default Object Browser, the Architecture Explorer in Visual Studio, or other tools like ReSharper's hierarchy/file views?
There really isn't a best way to do this as there are so many variables and every project is different from the next.
To be absolutely honest the best way to get your head around it is to create a sandpit/test environment and, for want of a better description, play with it. Then play with it some more.
As an example of 'playing with it', using the debugger and stepping through the code will tell you a lot about the flow and structure of the code. It is also worth mentioning that you should never trust comments; verify functionality yourself, since the code may have changed since a comment was written.
For diving into a new application with a large code base, the best solution I've found is to get the big picture through the reverse-engineering facilities in tools like Enterprise Architect.
If that's not available to you, try the class diagrams provided by Visual Studio.
That gives you the static structure of the program; to understand the flow of execution, follow the execution paths of the main scenarios using the facilities you can find in ReSharper, VS2008 (generate sequence diagrams, etc.) and VS2010 (View Call Hierarchy, etc.).
As said in previous answers, debugging and profiling the application is also very helpful: set breakpoints, look at the call stack, watch the objects, and so on.
I find that usually the best way to start with a completely unknown code base is just trying to get it to run.
After that, if there are bugs that need to be addressed, try to fix some of those.
That will give you insight into how difficult it is to update/maintain the system. You should also start to see code patterns, or lack thereof, emerge.
I often find that unit tests are a good place to start, provided there are some! At least through unit tests you get short examples of how the code works and where it should fail. Hopefully there is some documentation lying around too...
In VS2010 there is a tool under Architecture that will help you analyze your code base and generate a dependency diagram for you.
Check the project dependencies within the solution.
This will give you an idea of how the projects in the solution relate to one another.
Check the external DLLs used in the references.
This will tell you more about how the system is used.
Now you can make assumptions about the flow of the architecture.
You can then run the application and check the logs, which will give you an idea of the class and function flow.
You can then start debugging the code/module that is assigned to you.
This will put you in a better position to make changes.

How to instrument code for logging C#

I'm wondering if there's some Aspect-Oriented way of setting up logging for C# code, or if the code could be instrumented for automatic logging.
At the moment the code is riddled with Log("Enter method XXX") and Log("Leaving method XXX") calls, which makes maintenance really tedious.
Ideally I'd like to have something that does the logging automatically the same way as the libraries are instrumented for profiling.
The next best thing would be to have some custom attributes maybe that I can tag my methods with. These would put some logging code on entrance and exit of the method.
And if the solution were compatible with the EntLib that would be perfect :)
Cheers.
If you're using the Enterprise Library, you have everything you need. Take a look at this article: http://www.codewrecks.com/blog/index.php/2009/01/31/unity-and-aop-in-enterprise-library/
You could use Log4PostSharp. I am not sure though what the future of this looks like as PostSharp went commercial.
What you're referring to is a cross-cutting concern, and it affects not only your application but other applications that you might install at your establishment. The Enterprise Library blocks are great, and the inversion of control principle does help a lot with extracting repeated code out of the system. However, there is no way of logging without deciding somewhere in your code that you wish to record the event, for example exceptions, logging in, logging out, DB actions, restricted actions, etc. If you go the Enterprise Library route, it's all done through configuration files and policies.
In the solutions I have provided, I have moved the logging functionality outside of the application space, and it now sits beside every piece of code that I develop, ready and waiting to do the logging for me. On the last project I used a combination of Enterprise Library blocks and CouchDB. CouchDB really helps with the aspect side as it works over REST and JSON without involving itself too much in your application; writing an interface to the log files is just a matter of a bit of HTML. It really is a fire-and-forget type affair, until that bad ol' day when you need to scour the logs :)
The only problem that I have seen in applications where you automate the logging is that you use some sort of delegate process and pass things into it, which uses more stack space. But this is so trivial that it's beyond reason.
Program to interfaces, define your interfaces, and you should be okay.
I remember something regarding Interceptors / Proxying to log entry/exit of methods.
Stack Overflow question - How do I intercept a method call in C#?
and check out this blog (ref'd in the same question) - http://madcoderspeak.blogspot.com/2005/09/essential-interception-using-contexts.html
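To make the interception idea concrete, here is a minimal sketch using System.Reflection.DispatchProxy (available on modern .NET; on .NET Framework a Castle DynamicProxy or Unity interceptor plays the same role). IOrderService and OrderService are hypothetical stand-ins; the point is that the entry/exit logging lives in one place instead of being repeated in every method.

    // A minimal sketch of interface-based interception for entry/exit logging.
    // IOrderService/OrderService are hypothetical stand-ins.
    using System;
    using System.Reflection;

    public interface IOrderService
    {
        void PlaceOrder(int orderId);
    }

    public class OrderService : IOrderService
    {
        public void PlaceOrder(int orderId) { /* real work, no logging here */ }
    }

    public class LoggingProxy<T> : DispatchProxy
    {
        private T _target;

        public static T Wrap(T target)
        {
            object proxy = Create<T, LoggingProxy<T>>();
            ((LoggingProxy<T>)proxy)._target = target;
            return (T)proxy;
        }

        protected override object Invoke(MethodInfo targetMethod, object[] args)
        {
            Console.WriteLine($"Enter method {targetMethod.Name}");    // stand-in for Log(...)
            try
            {
                return targetMethod.Invoke(_target, args);
            }
            finally
            {
                Console.WriteLine($"Leaving method {targetMethod.Name}");
            }
        }
    }

    // Usage: IOrderService svc = LoggingProxy<IOrderService>.Wrap(new OrderService());
    // svc.PlaceOrder(42);   // entry and exit are logged automatically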

Sandboxing plugins with Managed Extensibility Framework

I'm working on an application where third party developers will be able to write plugins. I've been looking a little at Managed Extensibility Framework and it seems the right way to go.
One thing though: I want to prevent plugins from accessing the rest of the application freely (calling singletons, etc.) and instead restrict them to communicating via some interface. Ideally each plugin would have to "request" permission for different things like accessing other plugins and user data. Is there a good way to accomplish this?
The only thing I can think of otherwise is to have a security string passed to each method and obfuscate the hell out of the code, but that seems like an ugly solution :P
What you need is a new AppDomain to be the sandbox for your plugin, but I don't think MEF supports loading exports into a separate AppDomain at this time (I'm sure someone will correct me if this is no longer the case).
If this is a serious concern for you, consider using the bits in the System.Addin namespace, and see this section on Activation, Isolation, Security, and Sandboxing for more information. It's a much more robust and secure alternative to MEF, but is far less flexible.
Update: Kent Boogaart has a blog post showing how you can use MEF and MAF together.
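For reference, here is a minimal sketch of the AppDomain sandboxing idea on .NET Framework, independent of MEF/MAF; IPlugin and PluginHost are hypothetical names, and the plugin class itself would need to derive from MarshalByRefObject so calls cross the domain boundary through a proxy.

    // A minimal sketch of loading plugin code into a partial-trust AppDomain
    // (.NET Framework only); IPlugin and PluginHost are hypothetical names.
    using System;
    using System.Security;
    using System.Security.Permissions;

    public interface IPlugin
    {
        string Run(string input);
    }

    public static class PluginHost
    {
        public static IPlugin LoadSandboxed(string pluginAssembly, string pluginType)
        {
            // Grant only the bare minimum: permission to execute, nothing else.
            var permissions = new PermissionSet(PermissionState.None);
            permissions.AddPermission(
                new SecurityPermission(SecurityPermissionFlag.Execution));

            var setup = new AppDomainSetup
            {
                ApplicationBase = AppDomain.CurrentDomain.BaseDirectory
            };

            AppDomain sandbox = AppDomain.CreateDomain(
                "PluginSandbox", null, setup, permissions);

            // The plugin class must derive from MarshalByRefObject so that
            // calls cross the AppDomain boundary through a proxy.
            return (IPlugin)sandbox.CreateInstanceAndUnwrap(pluginAssembly, pluginType);
        }
    }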

Logging Framework, a good idea?

First of all, apologies for the subjective sounding title. This is intended as a direct question.
At present I am working on a suite of tools:
A C# Windows Service, primarily to maintain an Oracle database.
A C# Windows Service (which will be used on multiple node sites) to process content of the database.
An ASP.NET web interface to facilitate management of the overall "system".
Currently the two Windows Services have been developed as console applications (to ease debugging/development) and I am in the midst of converting them to services. After testing with these services for a couple of days now, I'm finding that I would like to increase the granularity of my logging. I miss Console.WriteLine() and would like to provide an alternate log target, such as a flat file, for this type of output. This has led me to think: "Should I be using a framework, or have I got enough?"
The reason I have mentioned the components I am developing is to provide insight into my situation. A "Core" DLL has been created, common across all components, abstracting the interaction layer between the applications and the database. It is within this DLL that a class has been created which will attempt to "log to a table in the database" and, on failure, "log to the local Event Log". This is it; that's the extent of the logging.
Throughout the aforementioned tools, there are multiple instances of logging not dissimilar to:
Log.LogError("Code", e.Message + "\n" + e.StackTrace);
Although quite basic, this method does make use of reflection to identify the source of the error.
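For illustration, a hypothetical reconstruction of that kind of helper (not the actual Core DLL code) could look like this:

    // Hypothetical reconstruction of the Log helper described above: the stack
    // trace identifies the calling method so call sites don't pass their own name.
    using System;
    using System.Diagnostics;
    using System.Reflection;

    public static class Log
    {
        public static void LogError(string code, string message)
        {
            // Frame 0 is LogError itself; frame 1 is the caller.
            MethodBase caller = new StackTrace().GetFrame(1).GetMethod();
            string source = caller.DeclaringType?.FullName + "." + caller.Name;

            // Try the database first, fall back to the local Event Log (omitted here).
            Console.WriteLine("[" + code + "] " + source + ": " + message);
        }
    }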
My Question
Looking at my current logging solution, it appears "sufficient" in terms of what it does and how it is integrated with all my solutions. However, I've been looking at logging frameworks (notably log4net) and their features impress me. The ability to add another output target (such as an SMTP server) if needed in the future sounds kind of cool to me! :)
What I would like to know is: what are the benefits of moving to a framework (like log4net)? How much will I have to adapt my code? Am I just looking at greener grass on the other side? And finally, but probably most importantly, am I doing the right thing? Should I just add the ability to "LogDebug" to my Log class and be done with it? The last thing I would want to do is completely overhaul my suite just for a "basic" feature, but if there are other benefits (to design, reliability, good practice, etc.) I'm interested.
Thanks,
Yes. Using an existing, proven logging framework (such as Log4net) is a good idea.
Log4Net is configurable at runtime (great for tracking down issues in production code).
As a commenter pointed out, it's also very simple to use.
Proper logging is especially beneficial when running code on multiple remote systems. As far as I recall, log4net will let you send your logs to a remote syslog server without much coding overhead (meaning you can view logs from all machines in one centralized place). Doing this will massively reduce the time it takes to get information about a bug or problem with the system, and should also give you an indication of how prevalent the issue is.
As mentioned in other posts, log4net also allows for multiple appenders and multiple log levels, so deciding where certain log information should be stored (e.g. in a database or in a local flat file; log4net even lets you spit logs out over telnet) is an absolute doddle.
As for implementing it, there are several good sites talking you through the setup. How you actually make use of the logging objects that log4net gives you is an architectural choice, but you could simply change the constructor of an object to take a log4net object and, from within that object, use the log4net object as you would Console.WriteLine.
I find the tutorial series here particularly useful, and it also goes into more depth than I can here about the benefits and the different ways of configuring log4net.
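As a concrete sketch of what adoption looks like, assuming the log4net package and a <log4net> section in the app's config file (the class name here is made up):

    // A minimal sketch of adopting log4net; DatabaseMaintenanceService is hypothetical.
    using System;
    using log4net;
    using log4net.Config;

    public class DatabaseMaintenanceService
    {
        // One logger per type; the type name becomes the logger name.
        private static readonly ILog Log =
            LogManager.GetLogger(typeof(DatabaseMaintenanceService));

        public static void Main()
        {
            // Reads appenders and levels from the config file, so logging can be
            // reconfigured without recompiling or redeploying.
            XmlConfigurator.Configure();

            Log.Info("Service starting");
            try
            {
                // ... real work ...
            }
            catch (Exception e)
            {
                Log.Error("Maintenance run failed", e);   // message plus exception/stack trace
            }
        }
    }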
Yes, you definitely want to use a logging framework. A logging framework will allow you to:
Set the logging levels for the different logger instances.
Set the "appenders" or output for each of the different logger instances.
Perhaps more importantly, if you use a logging framework, it is very easy to swap out one implementation for another (perhaps a null implementation that simply discards messages); whereas, if you write all your logging statements directly against your own class, swapping out the implementation will be a nightmare.
I think you should use log4net, simply because it's always better to reuse than to build your own thing. log4net has been used by a lot of developers and is pretty mature.
Think about your maintenance prospects: one or two months down the road, you might need to tweak your custom logging class a bit, to add some multithreading support, etc. And while you are fixing the bugs that arise from your logging class, you will miss log4net.
Well, one of the bigger benefits is not having to maintain the code yourself. Most of the time, logging frameworks have a lot more functionality than your own solution. Because they are so focused on logging, those frameworks are usually pretty complete in both functionality and ways to implement it. And then there's reliability; there's nothing worse than a logging framework that isn't logging anything because it's bugged. ;)
Take for example ELMAH for ASP.net applications. It also includes notifications, exports to various target formats, etc. Things that are pretty handy but you'll never build yourself unless you really need it.
How many changes to your code are needed obviously depends on both your code and the framework of choice. It's hard to say anything about that.
I am going to give a shout-out to NLog (http://nlog-project.org/home), as it doesn't suffer from the 'straight Java port, then rewrite' syndrome of most OSS .NET libraries.
Some key benefits for us were the very fast Logger.IsFooEnabled (volatile read) and the overall performance of the system.
To each their own, though; I personally prefer NLog for my projects (and for some of my clients too).
Cheers,
Florian
The advantage of using a good logging framework like log4net is that it has a small impact on your code in terms of the lines you have to alter (in other words, you only have to alter each existing logging line).
Also, if you are concerned about altering your code if you change frameworks, or if you feel you want to roll your own, then you could always create your own interface to a logging framework. Then you only ever have to change your code in one place after that.
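A minimal sketch of that idea, assuming log4net underneath (IAppLogger and Log4NetLogger are made-up names); swapping frameworks later then only touches the adapter, not the call sites:

    // A thin logging interface of your own, with a log4net-backed adapter.
    using System;

    public interface IAppLogger
    {
        void Debug(string message);
        void Error(string message, Exception ex);
    }

    public class Log4NetLogger : IAppLogger
    {
        private readonly log4net.ILog _log;

        public Log4NetLogger(Type owner)
        {
            _log = log4net.LogManager.GetLogger(owner);
        }

        public void Debug(string message) => _log.Debug(message);

        public void Error(string message, Exception ex) => _log.Error(message, ex);
    }

    // Call sites depend only on IAppLogger:
    // IAppLogger logger = new Log4NetLogger(typeof(SomeService));
    // logger.Debug("Processing started");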
I think sysadmins expect services to log to the Application event log in Windows.
Look up System.Diagnostics.EventLog, although log4net will write to that too.
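For example, a minimal sketch of writing to the Application event log directly; the source name is made up, and creating a source needs admin rights (usually done once at install time):

    // Writing to the Windows Application event log; the source name is hypothetical.
    using System.Diagnostics;

    public static class ServiceEventLogging
    {
        private const string Source = "OracleMaintenanceService";

        public static void LogStart()
        {
            // Creating a source requires admin rights; typically done at install time.
            if (!EventLog.SourceExists(Source))
                EventLog.CreateEventSource(Source, "Application");

            EventLog.WriteEntry(Source, "Service started", EventLogEntryType.Information);
        }
    }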
The introductory statement on the log4j website might help with some of your questions; the underlying principles are the same for log4net:
With log4j it is possible to enable logging at runtime without modifying the application binary. The log4j package is designed so that these statements can remain in shipped code without incurring a heavy performance cost. Logging behavior can be controlled by editing a configuration file, without touching the application binary.
Using a logger hierarchy it is possible to control which log statements are output at arbitrarily fine granularity but also great ease. This helps reduce the volume of logged output and minimize the cost of logging.
In this case there's clearly no need to reinvent the wheel. Most logging frameworks are fairly straightforward, so the extent of the changes will most likely depend on the size of your existing programs.
If you write your logger class properly, it will be easily extensible to any of your needs. Any framework can impress you with many features, but another framework is also another variable in your debugging process: it can report an error that does not exist, or cause an error of its own in combination with your application. If you are ready to be a beta tester for an open-source project, that is fine...
In your place I would write a log class that can be extended with the features you find interesting for your project, based on the feature lists of the known frameworks. I don't see any problem with logging something to a file and then sending it over SMTP; one small function does the job.
Moreover, you can write your own class that is fairly abstract and put your basic code in there; if you ever need to try an external framework, your class will be able to use it with minimal impact on the code. Just take a look at how these frameworks are implemented at the code level.
Keep in mind that you would need to learn how to use these frameworks properly, when for now you only need a very small part of what they offer...

Duplicate Functionality Amongst Multiple Projects

I'm currently working on two social networking sites that have a lot in common, yet are distinctively different. I find myself writing a lot of the same code for both (including UI), and was wondering if there is a best practice that will limit duplicating code.
One of the main problems is that these projects are very independent of each other and will likely have more differences than similarities soon. Also, once the initial work is done, they might be handed off to other programmers, so having shared code libraries might end up being a big problem.
Any suggestions from people who might have had to deal with a similar situation?
PS: I'm the only developer on both of these projects, and it looks like it's going to stay that way for a while.
Abstracting shared functionality back to a framework or library with defined interfaces and default implementations is a common way to handle this. For example, your plugin architecture, if you choose to support one, is probably something that could be shared among all of your projects. Most of the time the things you want to share are pretty basic functionality or relatively abstract functionality that can be easily customized. The former are easier to recognize and factor out to common libraries. The latter may sometimes be more work than simply re-implementing the code with minor changes (sharing patterns rather than code).
One thing you want to be careful of is to let the actual re-use drive the design of common libraries rather than coming up with a shared architecture in advance. It's very tempting to get caught up in framework design and abstracting it out for shared use. Unfortunately you often find that the shared use never develops or develops in a different direction than you expected and you end up rewriting or throwing away much of the framework -- or even worse, keeping and maintaining unused code. Let YAGNI (you aren't gonna need it) be your guide and delay refactoring to common libraries until you actually have a need.
There are a couple (at least) of different approaches here, and you could certainly use both. Firstly, you could move some common code into a separate project and just call that code statically. This is pretty easy to do, and I sometimes take this approach with simple helper functions that probably don't belong in a class in my main project - a good example would be a math library or something like that. The other approach is to extract common functionality into a class or interface which you then inherit and extend. Depending on what code you are looking to reuse, you might use either (or both) of these approaches.
I suspect you will find it easier than you think. Try it with some simple code, set up a new project in the same solution, reference your library from your existing code and see how it goes. There is also no reason not to reference your shared project in multiple solutions either.
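To sketch both approaches with made-up names (a static helper in a shared class library, and a base class with a default implementation that each site inherits and extends):

    // Shared class library referenced by both sites; names are hypothetical.
    namespace SocialShared
    {
        // Approach 1: simple static helpers called directly.
        public static class TextHelper
        {
            public static string Truncate(string value, int max) =>
                value.Length <= max ? value : value.Substring(0, max) + "...";
        }

        // Approach 2: shared default behaviour that each site can override.
        public abstract class NotificationSender
        {
            public virtual string FormatSubject(string subject) => "[Update] " + subject;

            public abstract void Send(string userId, string subject, string body);
        }
    }

    // In one of the sites:
    // public class SiteANotificationSender : SocialShared.NotificationSender
    // {
    //     public override void Send(string userId, string subject, string body)
    //     {
    //         // site-specific delivery (e-mail, on-site inbox, ...)
    //     }
    // }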
Having shared code libraries need not be a problem if the development gets handed off. For now you can have your 2 sites reference the same library (or libraries) which you maintain, but if and when you split the projects out to other teams you can give a copy of the shared code to each team.
