Object Auditing - C#

Currently we have quite a chunky auditing system for objects within our application. The flow goes like this:
- The class implements an interface
- The interface forces the class to implement methods that add the properties needing auditing to a List of KeyValuePairs
- The class then also needs to recreate the object's state from a list of key/value pairs
The developer needs to add all of this to their class. Our objects also change quite often, which is why we didn't just serialise the class.
What I would like to do is use attributes to mark the properties as auditable and then do everything automatically, so the developer doesn't really need to do anything.
My main question is: I know people always say reflection is slow, but how slow are we talking? What performance hit am I going to take from looking through the class, checking the attributes on each property, and then doing any required logic?
thanks for any help
Ste,

It's hard to give a specific answer because it depends on what adequate performance is for your application.
Reflection is slower than normal compiled code, but when worrying about performance problems it's always better to have something that works first and then use profiling to find the real bottlenecks and optimize those.
Premature optimization can lead to code that's much harder to maintain, making your developers less productive.
I would start with using reflection and write a good set of unit tests so you know your code is working. If performance turns out to be a problem you can use the Visual Studio profiler to profile your unit tests and discover the bottlenecks.
There are some libraries that can speed up reflection, or you could use Expression trees to replace your reflection code if it's too slow.

Whether the performance is OK or not depends on your app's context, so it's difficult to say whether it will be slow or fast for you; you should try it yourself.
Most probably, IMO, it would give pretty acceptable performance, but again I have no idea where you're going to use it.
Other solutions that come to mind could be:
SQLite, to save the key/value data
Aspect-Oriented Programming (like PostSharp) to generate the data at compile time.
But the first thing I would try is reflection, just as you're thinking.

Reading this response from Marc, I would suggest that reflection should be fine for most application needs.
Before making any fundamental changes I would suggest running a profiler to find the bottlenecks in your code. If you identify that the reflection/auditing process is the major pain point, use IL Emit and try again.

Reflection is the way to go here. If it's too slow (measure!) you can throw in a bit of caching, or in the worst case generate an Expression<T> and compile it.
There are two phases in your problem:
1. Figure out which properties you want and return a list of their PropertyInfos. You only need to do this once per type, and then you can cache it, so the performance of this step doesn't matter.
2. Get the value of each property with PropertyInfo.GetValue.
If step 2 is too slow, you need to generate an Expression in step 1, and the overhead over manually written code goes down to a single delegate invocation.
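Putting the two phases together, here's a minimal sketch of what that can look like; the AuditableAttribute and AuditHelper names are purely illustrative, not an existing API:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq.Expressions;
using System.Reflection;

// Hypothetical marker attribute for the auditing scenario in the question.
[AttributeUsage(AttributeTargets.Property)]
public sealed class AuditableAttribute : Attribute { }

public static class AuditHelper
{
    // Phase 1 result, cached per type: property name -> compiled getter.
    private static readonly ConcurrentDictionary<Type, Dictionary<string, Func<object, object>>> Getters =
        new ConcurrentDictionary<Type, Dictionary<string, Func<object, object>>>();

    public static List<KeyValuePair<string, object>> GetAuditValues(object instance)
    {
        var getters = Getters.GetOrAdd(instance.GetType(), BuildGetters);
        var values = new List<KeyValuePair<string, object>>(getters.Count);
        foreach (var pair in getters)
            values.Add(new KeyValuePair<string, object>(pair.Key, pair.Value(instance)));
        return values;
    }

    // Phase 2 made cheap: each PropertyInfo becomes a compiled delegate, so reading
    // a value is a single delegate call instead of PropertyInfo.GetValue.
    private static Dictionary<string, Func<object, object>> BuildGetters(Type type)
    {
        var result = new Dictionary<string, Func<object, object>>();
        foreach (PropertyInfo prop in type.GetProperties())
        {
            if (!prop.IsDefined(typeof(AuditableAttribute), true))
                continue;

            ParameterExpression obj = Expression.Parameter(typeof(object), "obj");
            Expression body = Expression.Convert(
                Expression.Property(Expression.Convert(obj, type), prop),
                typeof(object));
            result[prop.Name] = Expression.Lambda<Func<object, object>>(body, obj).Compile();
        }
        return result;
    }
}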

Related

Are there significant performance gains inherent in using .NET's built in classes?

Quick little question...
I know that sometimes in other languages libraries have part of their code written in platform-specific straight C for performance reasons. In such cases you can get huge performance gains by using library code wherever possible.
So does the .NET platform do this? Is Microsoft's implementation of the Base Class Library optimized in some way that I can't hope to match in managed code?
What about something small like using KeyValuePair as a type-safe tuple struct instead of writing my own?
As far as I know, the .NET Framework hasn't been compiled in a way that creates hooks into some otherwise-inaccessible hardware acceleration or something like that, so for simple things like KeyValuePair and Tuple, you're probably safe rolling your own.
However, there are a number of other advantages to using standard framework classes, and I'd hesitate to write my own without a strong reason.
They're already written, so why give yourself extra work?
Microsoft has put their code through a pretty rigorous vetting process, so there's a good chance that their code will be more correct and more efficient than yours will.
Other developers that have to look at your code will know exactly what to expect when they see standard framework classes being used, whereas your home-brewed stuff might make them scratch their heads for a while.
Update
@gordy also makes a good point, that the standard framework classes are being used by everybody and their dog, so there will be a slight performance gain simply due to the fact that:
the class likely won't have to be statically instantiated or just-in-time compiled just for your code,
the class's instructions are more likely to already be loaded in a cache, since they'll likely have been used recently by other parts of code. By using them, you're less likely to need to load the code into a cache in the first place, and you're less likely to be kicking other code out of the cache that's likely to be used again soon.
I've wondered this myself, but I suspect that it's not the case, since you can "decompile" all of the base libraries in Reflector.
There's probably still a performance advantage over homemade stuff in that the code is likely jitted already and cached.
I suggest you use built-in classes most of the time, UNLESS YOU'VE MEASURED IT'S NOT FAST ENOUGH.
I'm pretty sure MS put a lot of time and effort into building something fast and reliable. It is totally possible you can beat them... after a few weeks of effort. I just don't think it is worth the time, most of the time.
The only time it seems ok to rewrite something is when it does not do all that you want. Just be aware of the time cost and the associated difficulty.
Could you ever hope to match the performance? Possibly, though keep in mind their code has been fully tested and extremely optimized, so I'd say it's not a worthwhile effort unless you have a very specific need that no BCL type directly fulfills.
And .NET 4.0 already has a good Tuple<> implementation. Though in previous versions of .NET you'd have to roll your own if you need anything bigger than a KeyValuePair.
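For illustration, both built-in types are used the same way a hand-rolled pair or triple would be:

using System;
using System.Collections.Generic;

class PairDemo
{
    static void Main()
    {
        // Type-safe pair without writing your own struct.
        var kvp = new KeyValuePair<string, int>("answers", 42);

        // .NET 4.0+: Tuple covers more than two items.
        var triple = Tuple.Create("x", 1, 2.5);

        Console.WriteLine("{0} = {1}", kvp.Key, kvp.Value);
        Console.WriteLine("{0}, {1}, {2}", triple.Item1, triple.Item2, triple.Item3);
    }
}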
The real performance gain comes from the fact that the MS team built and tested the library methods. You can rest assured with a very high degree of comfort that the objects will behave without introducing bugs.
Then there is the matter of re-inventing the wheel. You'd really have to have a great reason for doing so.
The main performance factors usually lie in architecture or complex algorithms; the language rarely matters.
The Microsoft Base Class Library documentation usually comes with a complexity note for "heavy" methods, so you can easily decide whether to use a method or find another, faster algorithm to implement or use.
Of course, when it comes to heavy algorithms (graphics, archiving, etc.), the performance gains from dropping to a lower-level language come in handy.

Avoid or embrace C# constructs which break edit-and-continue?

I develop and maintain a large (500k+ LOC) WinForms app written in C# 2.0. It's multi-user and is currently deployed on about 15 machines. The development of the system is ongoing (can be thought of as a perpetual beta), and there's very little done to shield users from potential new bugs that might be introduced in a weekly build.
For this reason, among others, I've found myself becoming very reliant on edit-and-continue in the debugger. It helps not only with bug-hunting and bug-fixing, but in some cases with ongoing development as well. I find it extremely valuable to be able to execute newly-written code from within the context of a running application - there's no need to recompile and add a specific entry point to the new code (having to add dummy menu options, buttons, etc to the app and remembering to remove them before the next production build) - everything can be tried and tested in real-time without stopping the process.
I hold edit-and-continue in such high regard that I actively write code to be fully-compatible with it. For example, I avoid:
Anonymous methods and inline delegates (unless completely impossible to rewrite)
Generic methods (except in stable, unchanging utility code)
Targeting projects at 'Any CPU' (i.e. never executing in 64-bit)
Initializing fields at the point of declaration (initialisation is moved to the constructor)
Writing enumerator blocks that use yield (except in utility code)
Now, I'm fully aware that the new language features in C# 3 and 4 are largely incompatible with edit-and-continue (lambda expressions, LINQ, etc). This is one of the reasons why I've resisted moving the project up to a newer version of the Framework.
My question is whether it is good practice to avoid using these more advanced constructs in favor of code that is very, very easy to debug? Is there legitimacy in this sort of development, or is it wasteful? Also, importantly, do any of these constructs (lambda expressions, anonymous methods, etc) incur performance/memory overheads that well-written, edit-and-continue-compatible code could avoid? ...or do the inner workings of the C# compiler make such advanced constructs run faster than manually-written, 'expanded' code?
Without wanting to sound trite - it is good practice to write unit/integration tests rather than rely on Edit-Continue.
That way, you expend the effort once, and every other time is 'free'...
Now I'm not suggesting you retrospectively write units for all your code; rather, each time you have to fix a bug, start by writing a test (or more commonly multiple tests) that proves the fix.
As @Dave Swersky mentions in the comments, Michael Feathers' book, Working Effectively with Legacy Code, is a good resource (it's legacy 5 minutes after you wrote it, right?)
So yes, I think it's a mistake to avoid new C# constructs in favor of allowing for edit and continue; BUT I also think it's a mistake to embrace new constructs just for the sake of it, especially if they lead to harder-to-understand code.
I love 'Edit and Continue'. I find it is a huge enabler for interactive development/debugging and I too find it quite annoying when it doesn't work.
If 'Edit and Continue' aids your development methodology then by all means make choices to facilitate it, keeping in mind the value of what you are giving up.
One of my pet peeves is that editing anything in a function with lambda expressions breaks 'Edit and Continue'. If I trip over it enough I may write out the lambda expression. I'm on the fence with lambda expressions. I can do some things quicker with them but they don't save me time if I end up writing them out later.
In my case, I avoid using lambda expressions when I don't really need to. If they get in the way I may wrap them in a function so that I can 'Edit and Continue' the code that uses them. If they are gratuitous I may write them out.
Your approach doesn't need to be black and white.
I wanted to clarify a few things:
it is good practice to avoid using these more advanced constructs in favor of code that is very, very easy to debug?
Edit and Continue is not really debugging, it is developing. I make this distinction because the new C# features are very debuggable. Each version of the language adds debugging support for new language features to make them as easy as possible to debug.
everything can be tried and tested in real-time without stopping the process.
This statement is misleading. It's possible with Edit and Continue to verify a change fixes a very specific issue. It's much harder to verify that the change is correct and doesn't cause a host of other issues, mainly because Edit and Continue doesn't modify the binaries on disk and hence doesn't allow for things like unit testing.
Overall though, yes, I think it's a mistake to avoid new C# constructs in favor of allowing for Edit and Continue. Edit and Continue is a great feature (I really loved it when I first encountered it in my C++ days). But its value as a production-server helper doesn't make up for the productivity gains from the new C# features, IMHO.
My question is whether it is good practice to avoid using these more advanced constructs in favor of code that is very, very easy to debug
I would argue that any time you are forcing yourself to write code that is:
Less expressive
Longer
Repeated (from avoiding generic methods)
Non-portable (never debug and test 64bit??!?!?)
You are adding to your overall maintenance cost far more than the loss of the "Edit and Continue" functionality in the debugger.
I would write the best code possible, not code that makes a feature of your IDE work.
While there is nothing inherently wrong with your approach, it does limit you to the amount of expressiveness understood by the IDE. Your code becomes a reflection of its capabilities, not the language's, and thus your overall value in the development world decreases because you are holding yourself back from learning other productivity-enhancing techniques. Avoiding LINQ in favor of Edit-and-Continue feels like an enormous opportunity cost to me personally, but the paradox is that you have to gain some experience with it before you can feel that way.
Also, as has been mentioned in other answers, unit-testing your code removes the need to run the entire application all the time, and thus solves your dilemma in a different way. If you can't right-click in your IDE and test just the 3 lines of code you care about, you're doing too much work during development already.
You should really introduce continuous integration, which can help you find and eliminate bugs before deploying the software. Big projects especially (I consider 500k LOC quite big) need some sort of validation.
http://www.codinghorror.com/blog/2006/02/revisiting-edit-and-continue.html
Regarding the specific question: Don't avoid these constructs and don't rely on your mad debugging skills - try to avoid bugs at all (in deployed software). Write unit tests instead.
I've also worked on very large permanent-beta projects.
I've used anonymous methods and inline delegates to keep some relatively simple bits of use-one logic close to their sole place of use.
I've used generic methods and classes for reuse and reliability.
I've initialised classes in constructors to as full an extent as possible, to maintain class invariants and eliminate the possibility of bugs caused by objects in invalid states.
I've used enumerator blocks to reduce the amount of code needed to create an enumerator class to a few lines.
All of these are useful in maintaining a large rapidly changing project in a reliable state.
If I can't edit-and-continue, I edit and start again. This costs me a few seconds most of the time, a couple of minutes in nasty cases. It's worth it for the hours that greater ability to reason about code and greater reliability through reuse save me.
There's a lot I'll do to make it easier to find bugs, but not if it'll make it easier to have bugs too.
You could try Test Driven Development. I found it very useful to avoid using the debugger at all. You start from a new test (e.g. unit test), and then you only run this unit test to check your development - you don't need the whole application running all the time. And this means you don't need edit-and-continue!
I know that TDD is the current buzz-word, but it really works for me. If I need to use the debugger I take it as a personal failure :)
Relying on Edit and Continue sounds as if very little time is spent on designing new features, let alone unit tests. I find this bad because you probably end up doing a lot of debugging and bug fixing, and sometimes your bug fixes cause more bugs, right?
However, it's very hard to judge whether or not you should use particular language features, because this also depends on many, many other factors: project requirements, release deadlines, team skills, the cost of code manageability after refactoring, to name a few.
Hope this helps!
The issue you seem to be having is:
It takes too long to rebuild your app, start it up again and get to the bit of UI you are working on.
As everyone has said, unit tests will help reduce the number of times you have to run your app to find/fix bugs in non-UI code; however, they don't help with issues like the layout of the UI.
In the past I have written a test app that will quickly load the UI I am working on and fill it with dummy data, so as to reduce the cycle time.
Separating out non-UI code into other classes that can be tested with unit tests will allow you to use all C# constructs in those classes. Then you only need to limit the constructs used in the UI code itself.
When I started writing lots of unit tests, my usage of "edit-and-continue" went down; I now hardly use it apart from UI code.

Is reflection really THAT slow that I shouldn't use it when it makes sense to? [duplicate]

Possible Duplicate:
How costly is .NET reflection?
The "elegant" solution to a problem I am having is to use attributes to associate a class and its properties with another's. The problem is, to convert it to the other, I'd have to use reflection. I am considering it for a server-side app that will be hosted on the cloud.
I've heard many rumblings of "reflection is slow, don't use it," how slow is slow? Is it so CPU intensive that it'll multiply my CPU time so much that I'll literally be paying for my decision to use reflection at the bottom of my architecture on the cloud?
Just in case you don't see the update on the original question: when you are reflecting to find all the types that support a certain attribute, you have a perfect opportunity to use caching. That means you don't have to use reflection more than once at runtime.
To answer the general question, reflection is slower than raw compiled method calls, but it's much, much faster than accessing a database or the file system, and practically all web servers do those things all the time.
It's many times faster than filesystem access.
It's many many times faster than database access across the network.
It's many many many times faster than sending an HTTP response to the browser.
Probably you won't even notice it. Always profile first before thinking about optimizations.
I've wondered the same thing; but it turns out that reflection isn't all that bad. I can't find the resources (I'll try to list them when I find them), but I think I remember reading that it was maybe 2x to 3x slower. 50% or 33% of fast is still fast.
Also, under the hood ASP.NET WebForms and MVC do a bunch of reflection, so how slow can it really be?
EDIT
Here is one resource I remember reading: .Net Reflection and Performance
Hmm, I try to avoid reflection if I can, but if I have to create a solution and reflection gives me an elegant way to solve the problem at hand, I'll happily use it.
But it must be said that I think reflection should not be used for 'dirty tricks'. At this very moment, I'm also working on a solution where I use custom attributes to decorate some classes, and yes, I'll have to use reflection in order to know whether a class / property / whatever has been decorated with my custom attribute.
I also think it is a matter of 'how many reflection calls do you make'?
If I can, I try to cache my results.
For example, in the solution I'm working on: at application startup I inspect certain types in a certain assembly to see whether they have been decorated with my attribute, and I keep them in a dictionary.
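A rough sketch of that startup scan, assuming a hypothetical MyCustomAttribute marker; the reflection cost is paid once and later lookups only hit the dictionary:

using System;
using System.Collections.Generic;
using System.Reflection;

// Hypothetical marker attribute used in this sketch.
[AttributeUsage(AttributeTargets.Class)]
public sealed class MyCustomAttribute : Attribute { }

public static class DecoratedTypeCache
{
    private static readonly Dictionary<string, Type> DecoratedTypes = new Dictionary<string, Type>();

    // Called once at application startup.
    public static void Initialize(Assembly assembly)
    {
        foreach (Type type in assembly.GetTypes())
        {
            if (type.IsDefined(typeof(MyCustomAttribute), false))
                DecoratedTypes[type.FullName] = type;
        }
    }

    // Later lookups never touch reflection again.
    public static bool TryGet(string fullName, out Type type)
    {
        return DecoratedTypes.TryGetValue(fullName, out type);
    }
}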

Improve speed performances in C#

This is really two questions, but they are so similar, and to keep it simple, I figured I'd just roll them together:
Firstly: Given an established C# project, what are some decent ways to speed it up beyond just plain in-code optimization?
Secondly: When writing a program from scratch in C#, what are some good ways to greatly improve performance?
Please stay away from general optimization techniques unless they are C# specific.
This has previously been asked for Python, Perl, and Java.
Off the top of my head:
Replace non-generic variants of container classes by their generic counterparts
Cut down on boxing/unboxing. Specifically, use generics where possible and generally avoid passing value types as object.
For dialogs using many dynamic controls: suspend drawing until after inserting all controls by using SuspendLayout/ResumeLayout. This helps especially when using layout containers.
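For the layout tip, a rough WinForms sketch (the panel and control choices are just illustrative):

using System.Windows.Forms;

static class LayoutHelper
{
    // Add many controls with a single layout pass at the end.
    public static void AddManyLabels(Panel panel, int count)
    {
        panel.SuspendLayout();
        try
        {
            for (int i = 0; i < count; i++)
            {
                panel.Controls.Add(new Label { Text = "Item " + i, Top = i * 24, Left = 8 });
            }
        }
        finally
        {
            // true forces the pending layout to run now, in one pass.
            panel.ResumeLayout(true);
        }
    }
}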
Unfortunately, relatively few optimisations are language specific. The basics apply across languages:
Measure performance against realistic loads
Have clearly-defined goals to guide you
Use a good profiler
Optimise architecture/design relatively early
Only micro-optimise when you've got a proven problem
When you've absolutely proved you need to micro-optimise, the profiler tends to make it obvious what to look for - things like avoiding boxing and virtual calls.
Oh, one thing I can think of which is .NET-specific: if you need to make a call frequently and are currently using reflection, convert those calls into delegates.
EDIT: The other answers suggesting using generics and StringBuilder etc are of course correct. I (probably wrongly) assumed that those optimisations were too "obvious" ;)
One simple thing is to ensure that your build configuration is set to "Release". This will enable optimizations and eliminate debugging information, making your executable smaller.
More info on MSDN if needed.
Use a decent quality profiler and determine where your bottlenecks are.
Then start asking how to improve performance.
Anyone who makes any blanket statements like 'avoid reflection' without understanding both your performance profile and your problem domain should be shot (or at least reeducated). And given the size of the .Net landscape it's pretty much meaningless to talk about C# optimization: are we talking about WinForms, ASP.Net, BizTalk, Workflow, SQL-CLR? Without the context even general guidelines may be at best a waste of time.
Consider also what you mean by 'speed it up' and 'improve performance'. Do you mean greater resource efficiency, or lower perceived wait time for an end user (assuming there is one)? These are very different problems to solve.
Given the forum, I feel obliged to point out that there is some quite good coverage of these topics in Code Complete. Not C# specific, mind. But that's a good thing. Bear in mind that language-specific micro-optimisations might well be subsumed into the next version of whatever compiler you're using. And if the difference between for and foreach is a big deal to you, you're probably writing C++ anyway, right?
[I liked RedGate's ANTS Profiler, but I think it could be bettered]
With that out of the way, some thoughts:
- Use typeof(SomeType) in preference to instance.GetType() when possible
- Use foreach in preference to for
- Avoid boxing
- Up to (I think) 3 strings it's OK to do StringA + StringB + StringC; after that you should use a StringBuilder
Use StringBuilder rather than lots of string concatenation. String objects are immutable, and any modification (appending, to-upper, padding, etc.) actually generates a completely new string object rather than modifying the original. Each new string must be allocated and eventually garbage collected.
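A quick illustration of the difference; every += below allocates a brand-new string, while the StringBuilder version appends into one growable buffer:

using System.Text;

static class StringBuildingDemo
{
    // Allocates a new string on every iteration.
    static string Concatenate(string[] parts)
    {
        string result = string.Empty;
        foreach (string part in parts)
            result += part;
        return result;
    }

    // Appends into a single buffer, then materializes one string at the end.
    static string Build(string[] parts)
    {
        var sb = new StringBuilder();
        foreach (string part in parts)
            sb.Append(part);
        return sb.ToString();
    }
}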
A generalization of the prior statement: Try to reuse objects rather than creating lots and lots of them. Allocation and garbage collection may be easy to do, but they hit your performance.
Be sure to use the provided Microsoft libraries for most things. The classes provided by the Framework often use features that are unavailable or difficult to access from your own C# code (i.e. making calls out to the native Windows API). The built-in libraries are not always the most efficient, but more often than not they are.
Writing asynchronous apps has never been easier. Look into things like the BackgroundWorker class.
Try not to define Structs unless you really need them. Class instance variables each hold a reference to the actual instance, while struct instance variables each hold a separate copy.
Use composition instead of inheritance, limit boxing/unboxing, use generic collections, use foreach loops instead of for{} with a counter, and release resources with the standard Dispose pattern.
These are covered in detail in the excellent book Effective C#.
Profile your code. Then you can at least have an understanding of where you can improve. Without profiling you are shooting in the dark...
A lot of slowness is related to database access. Make your database queries efficient and you'll do a lot for your app.
I have the following MSDN article bookmarked, and find it to be a good general reference.
Improving .NET Application Performance
NGEN will help with some code, but do not bank on it.
Personally, if your design is bad/slow, there is not much you can do.
The best suggestion in such a case, is to implement some form of caching of expensive tasks.
I recommend these books:
Effective C#.
More Effective C#
Don't use too much reflection.
Use Ngen.exe (Should come shipped with Visual Studio.)
http://msdn.microsoft.com/en-us/library/6t9t5wcf(VS.80).aspx
The Native Image Generator (Ngen.exe) is a tool that improves the performance of managed applications. Ngen.exe creates native images, which are files containing compiled processor-specific machine code, and installs them into the native image cache on the local computer. The runtime can use native images from the cache instead of using the just-in-time (JIT) compiler to compile the original assembly.
In addition to the coding best practices listed above (using StringBuilders when appropriate, and items of that nature), I highly recommend using a code profiling tool such as ANTS Profiler by Red Gate. I have found that after taking the standard steps for optimization, using the profiler lets me further optimize my code by quickly identifying the area(s) most heavily hit by the application.
This is true for any language, not just C#
For an existing app, don't do anything until you know what's making it slow. IMHO, this is the best way.
For new apps, the problem is how programmers are taught. They are taught to make mountains out of molehills. After you've optimized a few apps using this you will be familiar with the problem of what I call "galloping generality" - layer upon layer of "abstraction", rather than simply asking what the problem requires. The best you can hope for is to run along after them, telling them what the performance problems are that they've just put in, so they can take them out as they go along.
Caching items that result from a query:
private Item _myResult;
public Item Result
{
    get
    {
        if (_myResult == null)
        {
            _myResult = Database.DoQueryForResult();
        }
        return _myResult;
    }
}
It's a basic technique that is frequently overlooked by starting programmers, and one of the EASIEST ways to improve performance in an application.
Answer ported from a question that was ruled a dupe of this one.
For Windows Forms on XP and Vista: Turn double buffering on across the board. It does cause transparency issues, so you would definitely want to test the UI:
protected override System.Windows.Forms.CreateParams CreateParams
{
    get
    {
        CreateParams cp = base.CreateParams;
        cp.ExStyle = cp.ExStyle | 0x2000000;
        return cp;
    }
}

How costly is .NET reflection?

I constantly hear how bad reflection is to use. While I generally avoid reflection and rarely find situations where it is impossible to solve my problem without it, I was wondering...
For those who have used reflection in applications, have you measured performance hits and, is it really so bad?
In his talk The Performance of Everyday Things, Jeff Richter shows that calling a method by reflection is about 1000 times slower than calling it normally.
Jeff's tip: if you need to call the method multiple times, use reflection once to find it, then assign it to a delegate, and then call the delegate.
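A rough sketch of that tip using Delegate.CreateDelegate, so the reflection lookup happens only once (the class and method names here are just illustrative):

using System;
using System.Reflection;

class DelegateFromReflectionDemo
{
    // The method we want to call repeatedly.
    public static int AddOne(int x) { return x + 1; }

    static void Main()
    {
        // Reflection once: find the MethodInfo.
        MethodInfo method = typeof(DelegateFromReflectionDemo).GetMethod("AddOne");

        // Bind it to a strongly-typed delegate once.
        var addOne = (Func<int, int>)Delegate.CreateDelegate(typeof(Func<int, int>), method);

        // Repeated calls now go through the delegate, not MethodInfo.Invoke.
        for (int i = 0; i < 5; i++)
            Console.WriteLine(addOne(i));
    }
}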
It is. But that depends on what you're trying to do.
I use reflection to dynamically load assemblies (plugins) and its performance "penalty" is not a problem, since the operation is something I do during startup of the application.
However, if you're reflecting inside a series of nested loops with reflection calls on each, I'd say you should revisit your code :)
For "a couple of time" operations, reflection is perfectly acceptable and you won't notice any delay or problem with it. It's a very powerful mechanism and it is even used by .NET, so I don't see why you shouldn't give it a try.
Reflection performance will depend on the implementation (repetitive calls should be cached, e.g. entity.GetType().GetProperty("PropName")). Since most of the reflection I see on a day-to-day basis is used to populate entities from data readers or other repository-type structures, I decided to benchmark performance specifically on reflection when it is used to get or set an object's properties.
I devised a test which I think is fair since it caches all the repeating calls and only times the actual SetValue or GetValue call. All the source code for the performance test is in bitbucket at: https://bitbucket.org/grenade/accessortest. Scrutiny is welcome and encouraged.
The conclusion I have come to is that it isn't practical and doesn't provide noticeable performance improvements to remove reflection in a data access layer that is returning less than 100,000 rows at a time when the reflection implementation is done well.
The graph above demonstrates the output of my little benchmark and shows that mechanisms that outperform reflection, only do so noticeably after the 100,000 cycles mark. Most DALs only return several hundred or perhaps thousands of rows at a time and at these levels reflection performs just fine.
If you're not in a loop, don't worry about it.
Not massively. I've never had an issue with it in desktop development unless, as Martin states, you're using it in a silly location. I've heard a lot of people have utterly irrational fears about its performance in desktop development.
In the Compact Framework (which I'm usually in) though, it's pretty much anathema and should be avoided like the plague in most cases. I can still get away with using it infrequently, but I have to be really careful with its application which is way less fun. :(
My most pertinent experience was writing code to compare any two data entities of the same type in a large object model property-wise. Got it working, tried it, ran like a dog, obviously.
I was despondent, but then overnight I realised that without changing the logic, I could use the same algorithm to auto-generate methods for doing the comparison, but statically accessing the properties. It took no time at all to adapt the code for this purpose, and I had the ability to do deep property-wise comparison of entities with static code that could be updated at the click of a button whenever the object model changed.
My point being: in conversations with colleagues since, I have several times pointed out that their use of reflection could be to auto-generate code to compile rather than perform runtime operations, and this is often worth considering.
It's bad enough that you have to be worried even about reflection done internally by the .NET libraries for performance-critical code.
The following example is obsolete - true at the time (2008), but long ago fixed in more recent CLR versions. Reflection in general is still a somewhat costly thing, though!
Case in point: You should never use a member declared as "Object" in a lock (C#) / SyncLock (VB.NET) statement in high-performance code. Why? Because the CLR can't lock on a value type, which means that it has to do a run-time reflection type check to see whether or not your Object is actually a value type instead of a reference type.
As with all things in programming, you have to balance performance cost against any benefit gained. Reflection is an invaluable tool when used with care. I created an O/R mapping library in C# which used reflection to do the bindings. This worked fantastically well. Most of the reflection code was only executed once, so any performance hit was quite small, but the benefits were great. If I were writing a new fandangled sorting algorithm, I would probably not use reflection, since it would probably scale poorly.
I appreciate that I haven't exactly answered your question here. My point is that it doesn't really matter. Use reflection where appropriate. It's just another language feature that you need to learn how and when to use.
Reflection can have a noticeable impact on performance if you use it for frequent object creation. I developed an application based on the Composite UI Application Block, which relies heavily on reflection, and there was noticeable performance degradation related to object creation via reflection.
However, in most cases there are no problems with reflection usage. If your only need is to inspect some assembly, I would recommend Mono.Cecil, which is very lightweight and fast.
Reflection is costly because of the many checks the runtime must make whenever you make a request for a method that matches a list of parameters. Somewhere deep inside, code exists that loops over all methods for a type, verifies its visibility, checks the return type and also checks the type of each and every parameter. All of this stuff costs time.
When you execute that method, internally there's some code that does things like checking that you passed a compatible list of parameters before executing the actual target method.
If possible, it is always recommended that you cache the method handle if you are going to continually reuse it in the future. Like all good programming tips, it often makes sense to avoid repeating oneself. In this case it would be wasteful to continually look up the method with certain parameters and then execute it each and every time.
Poke around the source and take a look at what's being done.
As with everything, it's all about assessing the situation. In DotNetNuke there's a fairly core component called FillObject that uses reflection to populate objects from datarows.
This is a fairly common scenario and there's an article on MSDN, Using Reflection to Bind Business Objects to ASP.NET Form Controls that covers the performance issues.
Performance aside, one thing I don't like about using reflection in that particular scenario is that it tends to reduce the ability to understand the code at a quick glance which for me doesn't seem worth the effort when you consider you also lose compile time safety as opposed to strongly typed datasets or something like LINQ to SQL.
Reflection does not drastically slow the performance of your app. You may be able to do certain things quicker by not using reflection, but if reflection is the easiest way to achieve some functionality, then use it. You can always refactor your code away from reflection if it becomes a perf problem.
I think you will find that the answer is, it depends. It's not a big deal if you want to put it in your task-list application. It is a big deal if you want to put it in Facebook's persistence library.
