AOP performance overhead - C#

I've been searching a bit for some performance tests about typical AOP tasks. I've not been able to find any, though. Could you help me?
I'm mostly thinking about Castle, Unity and perhaps PostSharp, even though it might be too expensive for my project.

I haven't seen any quantitative comparisons either, so this answer is probably far from complete.
It is difficult to compare the performance of Castle or Unity with PostSharp: Castle and Unity weave at runtime via dynamic proxying, while PostSharp weaves at compile time, so its overhead is paid once during the build. If runtime performance is crucial for you, compile-time solutions like PostSharp will always be faster, because generating AOP proxies at runtime means dynamically emitting IL and making heavy use of reflection.
So the performance tests that would make sense are ones comparing solutions that use the same technique - for example, Castle DynamicProxy versus Unity's interception proxy implementation.
I don't know the former well, but in the case of the latter there are still three different scenarios to compare - transparent proxies (MarshalByRefObject), interface proxies, and subclassing proxies - each with its own set of usage scenarios and its own performance overheads. From what I've read, the transparent proxy is horribly slow and shouldn't be used in AOP scenarios. Interface and subclassing proxies generate some IL on the fly, which is the same thing Castle DynamicProxy does, so I believe the differences shouldn't be large (but again, no quantitative results here).
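To make the proxy flavors concrete, here is a minimal Castle DynamicProxy sketch (the example types are my own, hypothetical ones) showing an interface proxy and a subclassing proxy side by side:

```csharp
using System;
using Castle.DynamicProxy;

public interface ICalculator { int Add(int a, int b); }

public class Calculator : ICalculator
{
    // Must be virtual for the subclassing proxy to intercept it.
    public virtual int Add(int a, int b) => a + b;
}

// A pass-through interceptor; a real aspect would do work around Proceed().
public class NoOpInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation) => invocation.Proceed();
}

public static class ProxyDemo
{
    public static void Main()
    {
        var generator = new ProxyGenerator();

        // Interface proxy: the generated type implements ICalculator
        // and forwards each call to the target through the interceptors.
        ICalculator viaInterface = generator.CreateInterfaceProxyWithTarget<ICalculator>(
            new Calculator(), new NoOpInterceptor());

        // Subclassing proxy: the generated type derives from Calculator
        // and overrides its virtual members.
        Calculator viaSubclass = generator.CreateClassProxy<Calculator>(new NoOpInterceptor());

        Console.WriteLine(viaInterface.Add(1, 2));  // 3
        Console.WriteLine(viaSubclass.Add(3, 4));   // 7
    }
}
```

In both cases the interceptor is reached via IL generated once, when the proxy type is created; the transparent-proxy approach has no equivalent here, because it relies on MarshalByRefObject remoting instead of generated IL.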

If you are looking for a lightweight AOP tool, there is an article, "Add Aspects to Object Using Dynamic Decorator" (http://www.codeproject.com/KB/architecture/aspectddecorator.aspx). It is thin and flexible.
It describes an approach to adding aspects to an object at runtime instead of adding aspects to a class at design time. The advantage of this approach is that you decide whether you need an aspect at the moment you use an object.
Most of today's AOP tools define aspects at the class level, at design time, so you don't have that flexibility when you use an object of those classes.

If performance is crucial in your project, make sure your usage of AOP is performance-oriented, because the overhead of an AOP framework is rarely a problem unless it is used inappropriately.
For example, if you use DynamicProxy, you can invoke the underlying method either through reflection or by calling the Proceed() method, and the two differ significantly in performance, as sketched below.
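A minimal sketch of the two options using Castle DynamicProxy's IInterceptor:

```csharp
using Castle.DynamicProxy;

public class TimingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        // Fast path: Proceed() dispatches to the intercepted method through
        // the proxy's generated plumbing, not per-call reflection.
        invocation.Proceed();

        // Slow alternative (illustration only): reflective invocation pays
        // the full MethodInfo.Invoke cost on every single call.
        // invocation.Method.Invoke(invocation.InvocationTarget, invocation.Arguments);
    }
}
```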
Another example: most AOP frameworks give you the MethodInfo of the intercepted method in your "advice". The way they obtain this metadata can affect your performance, because GetMethodFromHandle can behave very badly under heavy concurrency (it performs a dictionary access under a lock).
Another important thing to keep in mind: use the most lightweight overload suitable for your advice method, because if the AOP framework has to prepare too much information (arguments, MethodInfo, ...) you will pay for it in overhead. Unfortunately, sometimes there is no good user-facing API for writing a performant advice, even when the interception itself is perfect.
For more details, see my answer to the post when-is-aop-code-executed, where I give my feedback on the performance issues of AOP frameworks.

Related

AOP Vs Meta-Programming

Is there any difference between AOP and Meta-Programming?
Can we say that meta-programming techniques (IL weaving, dynamic subclassing, etc.) are mechanisms for achieving AOP, which is more about separating cross-cutting concerns from the main application code concerned with actual business requirements?
As I see it, metaprogramming is just a way to make AOP work without explicit support for it.
AOP could be implemented without metaprogramming, if your platform specifically supported it. And metaprogramming can be used for many things besides AOP.

unit testing legacy code: limits of "extract and override" vs JustMock/TypeMock/moles?

Given the following conditions:
a very old, big, C# legacy code base with no test coverage whatsoever
(almost) every class derives from some interface
nothing is sealed
What are the practical benefits of using profiler-API-driven solutions like JustMock and TypeMock, compared to using extract&override + e.g. RhinoMocks? Are there cases I'm not aware of, besides circumventing private/protected, where TypeMock/JustMock etc. are really needed? I'd especially welcome some experience from people who have switched to one of these products.
Using extract&override seems to solve all problems when handling old legacy code, the refactoring seems dead simple, and the possibility of introducing bugs seems very minor. Is the benefit writing less test code? More beautiful classes with less virtual protected stuff? Right now I don't 'get it', although I understand it's very helpful to first test private methods in isolation, as public methods may be too large under the hood in such old legacy codebases.
If you don't know what extract&override is: see here.
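In short, extract&override means isolating an awkward dependency behind a protected virtual method (a "seam") and overriding it in a testing subclass. A minimal sketch, with all names hypothetical:

```csharp
// Stand-in for a slow or external dependency buried in the legacy code.
public class LegacyRateService
{
    public decimal Fetch(string currency) => 1.0m; // imagine a web-service call
}

public class InvoiceProcessor
{
    public decimal Process(decimal amount)
    {
        // The awkward call has been extracted into a virtual method (the seam).
        return amount * GetExchangeRate("USD");
    }

    protected virtual decimal GetExchangeRate(string currency) =>
        new LegacyRateService().Fetch(currency);
}

// The testing subclass overrides the seam; no mocking framework required.
public class TestableInvoiceProcessor : InvoiceProcessor
{
    protected override decimal GetExchangeRate(string currency) => 1.5m;
}
```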
There are many differences between the frameworks that have nothing to do with the technology they are built on.
For example:
API - every framework has different notations and defaults (e.g. strict defaults vs. relaxed defaults)
Support - the proprietary frameworks usually offer support with their licenses
Price - this is not a matter of usage, but it does require budget
The main advantage of Extract&Override is that it requires some refactoring: if the code you're working on has been neglected, it gives you a good opportunity to go over it and refactor it toward better code, not just for testability.
The main advantage of using an isolation framework is that you do not need to change the code under test (in a large codebase, just refactoring for testability could take a long time). In addition, isolation frameworks do not force you into a specific design, which can be helpful if the existing design suits the legacy code well. Another feature that is useful in legacy code is swapping instances created inside the code under test; refactoring those instantiations usually takes more effort, which can be saved. Finally, there is faking third-party code: using isolation frameworks, you can isolate code that is not yours without writing wrapper classes.
Disclaimer - I work at Typemock

Is the Non-Virtual Interface (NVI) idiom as useful in C# as in C++?

In C++, I often needed NVI to get consistency in my APIs. I don't see it used as much by others in C#, though. I wonder if that is because C#, as a language, offers features that make NVI unnecessary? (I still use NVI in C# where needed, though.)
C# poses a problem with NVIs by taking away multiple inheritance. While I do think that multiple inheritance generates more evil than good, it is necessary (in most cases) for NVI. The simplest problem that jumps to mind: a class in C# cannot implement more than one NVI. Once one discovers this unpleasant aspect of the C#/NVI tandem, it becomes much easier to give up NVIs than C#.
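To illustrate that limitation with a hypothetical example:

```csharp
// Two "NVI interfaces", each expressed as an abstract base class.
public abstract class Drawable
{
    public void Draw() => DrawCore();   // non-virtual public surface
    protected abstract void DrawCore();
}

public abstract class Persistable
{
    public void Save() => SaveCore();
    protected abstract void SaveCore();
}

// Does not compile: C# permits only a single base class.
// public class Shape : Drawable, Persistable { }

// Plain interfaces have no such limit, but lose the non-virtual wrapper:
public interface IDrawable { void Draw(); }
public interface IPersistable { void Save(); }

public class Shape : IDrawable, IPersistable
{
    public void Draw() { /* ... */ }
    public void Save() { /* ... */ }
}
```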
And by the way, speaking of aspects: that's a very interesting concept, and its aim is exactly the same as that of NVI, only it attempts to look at the "true essence" of the issue and address it "properly", so to say. Take a look.
And as far as .NET Framework goes, there is a mechanism to do just that: inject code that is "orthogonal", so to say, to the main logic at hand. I'm talking about all that MarshalByRef/TransparentProxy business, I'm sure you've heard of it. It does seriously impact performance, though, so no big luck here.
There have also been numerous attempts to implement the same concept through other techniques, from building facades to the dirty business mentioned above to post-processing of MSIL.
The latter approach happens to appeal to yours truly the most, since it can be made transparent (by incorporating needed steps into one's build routine), it doesn't affect performance (more than is absolutely necessary to actually execute the "orthogonal" code) and it does not involve some kind of "hacking" or reverse engineering, since MSIL is open and well documented.
Here one can find these points discussed in more detail, as well as more information and links to actual tools. Using Google for the same purpose is also acceptable. :-)
Good luck.
I think the explanation is simply that in C#, "traditional" Java-style OOP is much more ingrained, and NVI runs counter to that. C# has a real interface type, whereas NVI relies on the "interface" actually being a base class. That's how it's done in C++ anyway, so it fits naturally there.
In C#, it can still be done, and it is still a very useful idiom (far more so, I'd say, than "normal" interfaces), but it requires you to ignore a built-in language feature.
Many C# programmers just wouldn't think of a NVI class as being "a proper interface". I think this mental resistance is the only reason why it's less common in C#.
Trey Nash in his book Accelerated C# promotes the NVI pattern as a canonical form in C#.
I don't know who wrote the article you reference (More C++ Idioms/Non-Virtual Interface), but I feel the author missed the point.
...
Interface vs Abstract classes
I'd argue that, philosophically, there's little difference (in C#) between a fully abstract class (i.e. one with no implementation whatsoever) and an interface. On the surface, both can provide a signature of methods that can be performed and require something else to implement that functionality.
With C# you would always program to an interface if what you need is an interface. You only use an (abstract) base class because you also want implementation reuse.
Many code bases combine these and program to the interface in addition to providing a class hierarchy as a default implementation for the interface.
NVI for Interfaces in C#
If your only motivation to use NVI in C++ would be to have an interface, then no, you're not going to use this in C# because the language / CLR provides interfaces as a first-class feature.
NVI and object hierarchies
In my mind, NVI has never been about interfaces. It's always been an excellent way to implement the template method pattern.
The usefulness manifests itself in code lifecycle maintenance (ease of change, extension, etc), and provides a simpler model of inheritance.
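To make that concrete, a minimal sketch of the idiom (hypothetical names): the public entry point is non-virtual and owns the invariants, while derived classes only fill in the hooks.

```csharp
using System;
using System.Collections.Generic;

public abstract class ReportGenerator
{
    // Public and non-virtual: the invariant sequence and validation live here.
    public string Generate(DateTime from, DateTime to)
    {
        if (to < from) throw new ArgumentException("Invalid date range.");
        var rows = LoadRows(from, to);   // customizable step
        return Render(rows);             // customizable step with a default
    }

    // Protected hooks that subclasses override.
    protected abstract IEnumerable<string> LoadRows(DateTime from, DateTime to);

    protected virtual string Render(IEnumerable<string> rows) =>
        string.Join(Environment.NewLine, rows);
}
```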
My opinion: Yes, NVI is very useful in C#.
I think NVI is as useful in C# as it is in C++. I see it used very frequently at my company.

Reasons for using a DLR-based language rather than C# for scripting tasks?

I'm considering embedding a scripting language into one of my software projects and have identified two options: compiling C# at run-time via CodeDOM and embedding a DLR-based scripting language. Both options would give me full access to the .NET Framework.
The operation that I'd be scripting would be a user-defined transformation of a DataRow and a set of metadata resulting in a modified DataRow. I expect these transforms will be composable and frequently invoked. Of course, I expect the transforms to be provided and modifiable by the end-user.
With this workload in mind, are there any clear advantages to using one approach over another?
For users it's usually better to use languages with a more forgiving syntax, for obvious reasons, so I would recommend using a DLR-based language. If you have the time and resources, a specialized DSL is the best choice, because you can offer a small and easy-to-learn syntax, and it's easier to keep the user from doing things they should not be doing (like accessing the filesystem, for instance...)
I can't speak from experience, but, from what I've seen, the DLR can be quite fast (IronPython does better than native Python!). But dynamic dispatch always entails a slight overhead. On the gripping hand, cross-AppDomain calls are pretty expensive. While the dynamic dispatch cost is paid everywhere inside the script, the cross-AppDomain cost is paid only once per script call. Which one does better depends on how much your scripts will do.
Embedding a DLR Scripting Host is not difficult at all. What's difficult is to roll your own DSL, if you choose to go that way.
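For illustration, a minimal IronPython hosting sketch (the transform body and variable names are made up; it requires referencing the IronPython and Microsoft.Scripting assemblies):

```csharp
using System.Data;
using IronPython.Hosting;
using Microsoft.Scripting.Hosting;

public static class TransformRunner
{
    public static void Run(string userScript, DataRow row)
    {
        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();
        scope.SetVariable("row", row);      // expose the row to user code

        // e.g. userScript = "row['Total'] = row['Price'] * row['Quantity']"
        engine.Execute(userScript, scope);  // mutates the row in place
    }
}
```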
You could also look into boo. It's a static CLI language that looks like Python, thanks to type inference. Its compiler is highly extensible, and I've had some success writing small DSLs with it. You could also look into Oren's book Writing DSLs with boo.

Applying Aspect Oriented Programming

I've been using some basic AOP style solutions for cross-cutting concerns like security, logging, validation, etc. My solution has revolved around Castle Windsor and DynamicProxy because I can apply everything using a Boo based DSL and keep my code clean of Attributes. I was told at the weekend to have a look at PostSharp as it's supposed to be a "better" solution. I've had a quick look at PostSharp, but I've been put off by the Attribute usage.
Has anyone tried both solutions and would care to share their experiences?
Couple of minor issues with PostSharp...
One issue I've had with PostSharp is that when using ASP.NET, line numbers in exception messages are 'out' by the number of IL instructions PostSharp injects into the assemblies, because the PDBs aren't adjusted to match :-).
Also, without the PostSharp assemblies available at runtime, you get runtime errors. With Windsor, the cross-cuts can be turned off later without recompiling the code.
(hope this makes sense)
I've only looked at Castle Windsor briefly so far, so I can't comment on that, but I did use PostSharp.
PostSharp works by weaving at compile time. It adds a post-compile step to your build in which it modifies the compiled code, so the result is as if you had programmed the cross-cutting concerns directly into your code. This is a bit more performant than runtime weaving, and because of its use of attributes PostSharp is very easy to use. I think using attributes for AOP isn't as problematic as using them for DI, but that's just my personal taste.
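For reference, the attribute style looks roughly like this (based on PostSharp's OnMethodBoundaryAspect; exact types and namespaces vary between versions, and my example classes are hypothetical):

```csharp
using System;
using PostSharp.Aspects;

[Serializable]  // aspect instances are serialized into the assembly at build time
public class TraceAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args) =>
        Console.WriteLine("Entering " + args.Method.Name);

    public override void OnExit(MethodExecutionArgs args) =>
        Console.WriteLine("Leaving " + args.Method.Name);
}

public class OrderService
{
    [Trace]  // PostSharp weaves the aspect into this method after compilation
    public void PlaceOrder(int orderId) { /* ... */ }
}
```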
But...
If you already use Castle for dependency injection, I don't see a good reason why you shouldn't also use it for the AOP stuff. Although runtime AOP is a bit slower than compile-time weaving, I think it's also more powerful. AOP and DI are, in my opinion, related concepts, so I think it's a good idea to use one framework for both. So I'll probably look at the Castle stuff again for the next project where I need AOP.
