I have a method that should only run if a certain criterion is fulfilled; if it's not, then the method won't be executed. Currently, this is how I code it:
public void CanAccessDatabase()
{
    if (!StaticClass.IsEligible())
    {
        return;
    }
    // do the logic
}
Now, this code is ugly because out of nowhere there is this StaticClass.IsEligible() guard that is not relevant to the concern of the method.
So I am thinking about putting the IsEligible check in an attribute, so that my code will look like this. If the condition is not fulfilled, then the method will just return without executing the logic below.
[IsEligibleCheck]
public void CanAccessDatabase()
{
    // do the logic
}
Eligibility is a runtime decision, of course.
Any idea on how to code up the logic for IsEligibleCheck? Thanks.
Edit: I know PostSharp can do this, but I am looking for something that works out of the box, not depending on any third-party library.
Any idea on how to code up the logic for IsEligibleCheck?
This is a perfect spot for AOP.
Edit: I know PostSharp can do this, but I am looking for something that works out of the box, not depending on any third-party library.
Is Microsoft considered third-party? If not, you could look at Unity from their Patterns & Practices team. Look at the Interceptor mechanism in Unity.
Otherwise, you effectively have to roll your own implementation using reflection: wrap your objects in a proxy, where the proxy uses reflection to check the attributes and interpret them appropriately. If the IsEligibleCheck passes, the proxy invokes the method on the wrapped object. Really, it's easier to just reuse an existing implementation.
My advice is just use Unity (or another AOP solution).
Unfortunately, attributes don't get executed at runtime. A handful of built-in attributes modify the code that gets compiled, like the MethodImpl attribute and similar, but all custom attributes are just metadata. If no code goes looking for that metadata, it will sit there and not impact the execution of your program at all.
In other words, you need that if-statement somewhere.
Unless you can use a tool like PostSharp, you cannot get this done in out-of-the-box .NET without explicit checks for the attribute.
This looks like a perfect candidate for AOP. In a nutshell, this means that the eligibility-check logic will live in an "aspect" or "interceptor", that is, separate from the business logic, thus achieving separation of concerns (the aspect is only responsible for security, the business code is only responsible for business things).
In C#, two popular options for doing AOP are Castle.DynamicProxy and PostSharp. Each has its pros and cons. This question sums up their differences.
Here are other options for doing AOP in .NET; some of them can be done without third-party libraries. I still recommend using either DynamicProxy, PostSharp, LinFu, Spring.AOP or Unity, as the other solutions are not nearly as flexible.
Custom attributes go hand in hand with Reflection.
You will need to create another class that is responsible for calling the methods of the class that contains CanAccessDatabase().
Using reflection, this new class will determine the attributes on each method. If the IsEligibleCheck attribute is found, it will perform the StaticClass.IsEligible() check and only call CanAccessDatabase() if the check passes.
Here's an introduction to doing this on MSDN. It revolves around using the MemberInfo.GetCustomAttributes() method.
Here's the pseudocode:
Get the Type of the class containing CanAccessDatabase()
Using this type, get all methods in the class (optionally filtering public, private, etc.)
Loop through the list of methods
    Call GetCustomAttributes() on the current method
    Loop through the list of custom attributes
        If the IsEligibleCheck attribute is found
            If StaticClass.IsEligible() is true
                Call the current method (using MethodInfo.Invoke())
            End If
        End If
    End Loop
End Loop
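For illustration, here's a minimal sketch of what that pseudocode might look like in C#. The attribute and invoker class are made-up names, the StaticClass stub stands in for the question's eligibility check, and the sketch assumes the guarded methods take no parameters.

using System;
using System.Reflection;

// Marker attribute; it carries no behaviour of its own.
[AttributeUsage(AttributeTargets.Method)]
public class IsEligibleCheckAttribute : Attribute { }

// Stand-in for the question's eligibility check.
public static class StaticClass
{
    public static bool IsEligible() { return true; }
}

public static class GuardedInvoker
{
    // Invokes every public instance method on the target, skipping those
    // marked [IsEligibleCheck] when the eligibility check fails.
    public static void InvokeAll(object target)
    {
        MethodInfo[] methods = target.GetType().GetMethods(
            BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly);

        foreach (MethodInfo method in methods)
        {
            bool guarded = method.GetCustomAttributes(
                typeof(IsEligibleCheckAttribute), true).Length > 0;

            if (guarded && !StaticClass.IsEligible())
            {
                continue; // condition not met: skip this method
            }

            method.Invoke(target, null); // assumes parameterless methods
        }
    }
}

The obvious drawback, as other answers note, is that callers must go through GuardedInvoker (or a proxy built on the same idea) rather than calling the method directly.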
I know this is an old thread...
You can use the Conditional Attribute: http://msdn.microsoft.com/en-us/library/system.diagnostics.conditionalattribute.aspx
"Indicates to compilers that a method call or attribute should be ignored unless a specified conditional compilation symbol is defined."
#define IsEligibleCheck // or define elsewhere

[Conditional("IsEligibleCheck")]
public void CanAccessDatabase()
{
    // do the logic
}
Check out AOP; it will help you a lot with this. One of the most powerful components on the market is PostSharp.
Related
Is there a better way to log or add method calls to a series of functions without explicitly adding in the function calls?
For example I have a class like:
public class MyClass
{
    public void DoStuff()
    {
        doSomethingEnter();
        //MyCode
        doSomethingExit();
    }

    public void DoStuff_2()
    {
        doSomethingEnter();
        //MyCode
        doSomethingExit();
    }
}
doSomethingEnter() and doSomethingExit() can be logging calls or events calling other things that need to occur on function start and end.
I have read on Aspect Oriented Programming that allows me to do something like this:
public class MyClass
{
    [DoSomethingEnterExit]
    public void DoStuff()
    {
        //MyCode
    }
}
But is there a pattern or framework that allows me to do something similar without having to buy an AOP framework like PostSharp?
Thanks.
EDIT
I have also looked into doing sort of a facade to make sure stuff always gets called with a master method that has the entry and exit functions.
public class MyFacade
{
    public void MasterMethod(string methodName)
    {
        doSomethingEnter();
        if (methodName == "DoStuff")
        {
            MyClass.DoStuff();
        }
        else
        {
            MyClass.DoStuff_2();
        }
        doSomethingExit();
    }
}
Use AOP (Aspect Oriented Programming) to inject logging into your code transparently at compile time.
PostSharp is an AOP library for .NET. It uses attributes to let you put markers (aspects) in your code; a post-compilation step then locates those markers and weaves the actual code in around them.
Disclosure: I'm not affiliated with PostSharp, but have used it before in some of my projects.
Edit:
As far as I remember, there is still a free and open source version of PostSharp that will work for the logging scenario.
I haven't heard of other AOP libraries in the .NET ecosystem.
Castle Windsor offers interceptors that will do this for you. If you don't want to use an entire IoC framework just for this, you could go straight to DynamicProxy to achieve the same thing. However, having an IoC container handle the wiring is nice and neat.
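As a rough sketch (not the author's code), a DynamicProxy interceptor for the enter/exit logging could look like the following; the interface mirrors the question's MyClass and the Console calls are stand-ins for real logging.

using System;
using Castle.DynamicProxy;

public interface IMyClass
{
    void DoStuff();
}

public class MyClass : IMyClass
{
    public void DoStuff() { /* MyCode */ }
}

// Interceptor that wraps every call on the proxied interface with enter/exit logging.
public class EnterExitInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        Console.WriteLine("Enter " + invocation.Method.Name); // stand-in for doSomethingEnter()
        try
        {
            invocation.Proceed(); // run the real method
        }
        finally
        {
            Console.WriteLine("Exit " + invocation.Method.Name); // stand-in for doSomethingExit()
        }
    }
}

public static class Example
{
    public static void Main()
    {
        var generator = new ProxyGenerator();
        IMyClass proxy = generator.CreateInterfaceProxyWithTarget<IMyClass>(
            new MyClass(), new EnterExitInterceptor());
        proxy.DoStuff(); // logs entry, runs DoStuff, logs exit
    }
}

With Castle Windsor, the same interceptor can be registered against a component so the container hands out proxied instances automatically.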
You could also consider using the Decorator pattern. However, this often results in significant proliferation of code if the concerns are many.
A drawback to consider when using interception or decoration is that the additional concerns each pattern adds are transparent. This is great for reducing the noise but can be difficult to debug or enhance if those supporting the code are unaware or unfamiliar with the concepts.
An additional drawback, as highlighted by @Bishoy in the comments below, is that these are run-time solutions that may have an impact on performance that the 'static' approaches do not.
I'm writing a library that has several public classes and methods, as well as several private or internal classes and methods that the library itself uses.
In the public methods I have a null check and a throw like this:
public int DoSomething(int number)
{
    if (number == null)
    {
        throw new ArgumentNullException(nameof(number));
    }

    return number;
}
But then this got me thinking, to what level should I be adding parameter null checks to methods? Do I also start adding them to private methods? Should I only do it for public methods?
Ultimately, there isn't a uniform consensus on this. So instead of giving a yes or no answer, I'll try to list the considerations for making this decision:
Null checks bloat your code. If your procedures are concise, the null guards at the beginning of them may form a significant part of the overall size of the procedure, without expressing the purpose or behaviour of that procedure.
Null checks expressively state a precondition. If a method is going to fail when one of the values is null, having a null check at the top is a good way to demonstrate this to a casual reader without them having to hunt for where it's dereferenced. To improve this, people often use helper methods with names like Guard.AgainstNull, instead of having to write the check each time (a minimal sketch of such a helper appears after this list).
Checks in private methods are untestable. By introducing a branch in your code which you have no way of fully traversing, you make it impossible to fully test that method. This conflicts with the point of view that tests document the behaviour of a class, and that that class's code exists to provide that behaviour.
The severity of letting a null through depends on the situation. Often, if a null does get into the method, it'll be dereferenced a few lines later and you'll get a NullReferenceException. This really isn't much less clear than throwing an ArgumentNullException. On the other hand, if that reference is passed around quite a bit before being dereferenced, or if throwing an NRE will leave things in a messy state, then throwing early is much more important.
Some libraries, like .NET's Code Contracts, allow a degree of static analysis, which can add an extra benefit to your checks.
If you're working on a project with others, there may be existing team or project standards covering this.
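Here is the minimal guard-helper sketch mentioned above; the Guard class, the AgainstNull method, and the OrderService example are purely illustrative names.

using System;

public static class Guard
{
    // One reusable precondition check instead of a multi-line if block per parameter.
    public static void AgainstNull(object value, string parameterName)
    {
        if (value == null)
        {
            throw new ArgumentNullException(parameterName);
        }
    }
}

public class OrderService
{
    public void Process(object order) // 'order' would be a real domain type
    {
        Guard.AgainstNull(order, nameof(order));
        // ... rest of the method, free of guard noise
    }
}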
If you're not a library developer, don't be defensive in your code
Write unit tests instead
In fact, even if you're developing a library, throwing is, most of the time, bad.
1. Testing null on int must never be done in C#:
It raises warning CS0472, because the comparison is always false.
2. Throwing an Exception means it's exceptional: abnormal and rare.
It should never be raised in production code, especially because exception stack-trace traversal can be a CPU-intensive task, and you'll never be sure where the exception will be caught, whether it's caught and logged or just silently swallowed (after killing one of your background threads), because you don't control the user code. There are no checked exceptions in C# (unlike Java), which means you never know, unless it's well documented, which exceptions a given method could raise. By the way, that kind of documentation must be kept in sync with the code, which is not always easy to do (it increases maintenance costs).
3. Exceptions increase maintenance costs.
As exceptions are thrown at runtime and only under certain conditions, they can be detected really late in the development process. As you may already know, the later an error is detected in the development process, the more expensive the fix will be. I've even seen exception-raising code make its way into production and not raise for a week, only to start raising every day thereafter (killing production; oops!).
4. Throwing on invalid input means you don't control input.
That's the case for public methods of libraries. However, if you can enforce it at compile time with another type (for example, a non-nullable type like int), then that's the way to go. And of course, as they are public, it's their responsibility to check the input.
Imagine a user who passes what he thinks is valid data and then, as a side effect, a method deep in the call stack throws an ArgumentNullException.
What will be his reaction?
How can he cope with that?
Will it be easy for you to provide an explanatory message?
5. Private and internal methods should never ever throw exceptions related to their input.
You may throw exceptions in your code because an external component (maybe Database, a file or else) is misbehaving and you can't guarantee that your library will continue to run correctly in its current state.
Making a method public doesn't mean that it should (only that it can) be called from outside of your library (look at Public versus Published from Martin Fowler). Use IoC, interfaces, factories, and publish only what's needed by the user, while making the whole library's classes available for unit testing. (Or you can use the InternalsVisibleTo mechanism.)
6. Throwing exceptions without any explanation message is making fun of the user
No need to describe the feelings one can have when a tool is broken without any clue how to fix it. Yes, I know, you come to SO and ask a question...
7. Invalid input means it breaks your code
If your code can produce a valid output with the value then it's not invalid and your code should manage it. Add a unit test to test this value.
8. Think in user terms:
Do you like it when a library you use throws exceptions in your face? Like: "Hey, it's invalid, you should have known that!"
Even if, from your point of view and with your knowledge of the library internals, the input is invalid, how can you explain it to the user (be kind and polite)?
Clear documentation (XML doc comments and an architecture summary may help).
Publish the XML doc with the library.
Clear error explanation in the exception if any.
Give the choice:
Look at the Dictionary class. Which do you prefer? Which call do you think is the fastest? Which call can raise an exception?
Dictionary<string, string> dictionary = new Dictionary<string, string>();
string res;
dictionary.TryGetValue("key", out res);
or
var other = dictionary["key"];
9. Why not use Code Contracts?
It's an elegant way to avoid the ugly if-then-throw and to isolate the contract from the implementation, letting you reuse the contract across different implementations. You can even publish the contracts to your library users to further explain how to use the library.
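As a small, hedged illustration (the PersonRepository and Person types are invented, and enforcing the contract at runtime requires the Code Contracts rewriter to be enabled on the project):

using System;
using System.Diagnostics.Contracts;

public class Person { }

public class PersonRepository
{
    public void Save(Person person)
    {
        // Precondition expressed as a contract instead of an if/throw block.
        // Without the Code Contracts binary rewriter this is not enforced at runtime.
        Contract.Requires<ArgumentNullException>(person != null);

        // ... actual save logic
    }
}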
In conclusion, even though it's easy to use throw, and even though you will see exceptions raised when you use the .NET Framework, that doesn't mean they should be used without caution.
Here are my opinions:
General Cases
Generally speaking, it is better to check any inputs for validity before you process them in a method, for robustness reasons, be it in private, protected, internal, protected internal, or public methods. Although there is some performance cost to this approach, in most cases it is worth paying rather than spending more time debugging and patching the code later.
Strictly Speaking, however...
Strictly speaking, however, it is not always necessary to do so. Some methods, usually private ones, can be left without any input checking, provided that you can fully guarantee that the method is never called with invalid inputs. This may give you some performance benefit, especially if the method is called frequently to do some basic computation or action. In such cases, checking input validity may impair performance significantly.
Public Methods
Now, public methods are trickier. This is because, strictly speaking, although the access modifier alone can tell you who can use the methods, it cannot tell you who will use them. Moreover, it also cannot tell you how the methods are going to be used (that is, whether they are going to be called with invalid inputs in the given scopes or not).
The Ultimate Determining Factor
Although access modifiers for methods in the code can hint at how the methods should be used, ultimately it is humans who will use them, and it is up to the humans how they are going to use them and with what inputs. Thus, in some rare cases, it is possible to have a public method which is only called in some private scope, and in that private scope the inputs for the public method are guaranteed to be valid before it is called.
In such cases, even though the access modifier is public, there isn't any real need to check for invalid inputs, except for reasons of robust design. And why is this so? Because there are humans who know exactly when and how the methods will be called!
Here we can see that there is no guarantee that public methods always require checking for invalid inputs. And if this is true for public methods, it must also be true for protected, internal, protected internal, and private methods.
Conclusions
So, in conclusion, we can say a couple of things to help us make the decision:
Generally, it is better to check for any invalid inputs, for reasons of robust design, provided that performance is not at stake. This is true for any access modifier.
The invalid-input check can be skipped if doing so yields a significant performance gain, provided it can also be guaranteed that the scopes from which the methods are called always give them valid inputs.
Private methods are usually where we skip such checking, but there is no rule that we cannot do the same for public methods.
Humans are the ones who ultimately use the methods. Regardless of how the access modifiers hint at the use of the methods, how the methods are actually used and called depends on the coders. Thus, we can only speak of general good practice, without claiming it is the only way of doing things.
The public interface of your library deserves tight checking of preconditions, because you should expect the users of your library to make mistakes and violate the preconditions by accident. Help them understand what is going on in your library.
The private methods in your library do not require such runtime checking because you call them yourself. You are in full control of what you are passing. If you want to add checks because you are afraid to mess up, then use asserts. They will catch your own mistakes, but do not impede performance during runtime.
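A small sketch of that split, with invented names: a real check at the public boundary, and a Debug.Assert (compiled away in Release builds) inside the private helper.

using System;
using System.Diagnostics;

public class ReportBuilder
{
    // Public entry point: validate caller input with a real check.
    public string Build(string title)
    {
        if (title == null)
        {
            throw new ArgumentNullException(nameof(title));
        }
        return Format(title);
    }

    // Private helper: only we call it, so an assert merely documents the assumption.
    private string Format(string title)
    {
        Debug.Assert(title != null, "Build should have rejected null titles");
        return "Report: " + title;
    }
}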
Though you tagged this language-agnostic, it seems to me that there probably isn't a general answer.
Notably, in your example you typed the argument, so in a language with type hinting an error will fire as soon as the function is entered, before you can take any action.
In that case, the only solution is to have checked the argument before calling your function... but since you're writing a library, that makes no sense!
On the other hand, with no hinting, it remains reasonable to check inside the function.
So at this point in the reasoning, I'd already suggest giving up on hinting.
Now let's go back to your precise question: to what level should it be checked?
For a given piece of data, that would happen only at the highest level where it can "enter" (there may be several entry points for the same data), so logically it would concern only public methods.
That's the theory. But maybe you're planning a huge, complex library, so it might not be easy to be certain you have identified every "entry point".
In that case, I'd suggest the opposite: consider simply applying your checks everywhere, then omit them only where you clearly see they are duplicated.
Hope this helps.
In my opinion you should ALWAYS check for "invalid" data - independent whether it is a private or public method.
Looked at the other way around: why should you be able to work with something invalid just because the method is private? That doesn't make sense, right? Always try to use defensive programming and you will be happier in life ;-)
This is a question of preference. But consider instead why you are checking for null, or rather checking for valid input. It's probably because you want to let the consumer of your library know when he/she is using it incorrectly.
Let's imagine that we have implemented a class PersonList in a library. This list can only contain objects of the type Person. We have also implemented some operations on our PersonList, and therefore we do not want it to contain any null values.
Consider the two following implementations of the Add method for this list:
Implementation 1
public void Add(Person item)
{
    if (_size == _items.Length)
    {
        EnsureCapacity(_size + 1);
    }
    _items[_size++] = item;
}
Implementation 2
public void Add(Person item)
{
    if (item == null)
    {
        throw new ArgumentNullException(nameof(item), "Cannot add null to PersonList");
    }
    if (_size == _items.Length)
    {
        EnsureCapacity(_size + 1);
    }
    _items[_size++] = item;
}
Let's say we go with implementation 1:
Null values can now be added to the list.
All operations implemented on the list will have to handle these null values.
If we check for and throw an exception in our operations, the consumer will be notified about the exception when calling one of them, and at that point it will be very unclear what he/she has done wrong (it just wouldn't make sense to go with this approach).
If we instead choose to go with implementation 2, we make sure input to our library has the quality that we require for our class to operate on it. This means we only need to handle this here and then we can forget about it while we are implementing our other operations.
It will also become clearer to the consumer that he/she is using the library in the wrong way when he/she gets an ArgumentNullException on .Add instead of in .Sort or similar.
To sum it up, my preference is to check for valid arguments when they are supplied by the consumer, rather than in the private/internal methods of the library. This basically means we have to check arguments in constructors/methods that are public and take parameters. Our private/internal methods can only be called from our public ones, and those have already checked the input, which means we are good to go!
Using Code Contracts should also be considered when verifying input.
I've got some code that performs some legacy 'database' operation and then processes the result. I want to write a unit test that checks the method that calls the legacy code without interacting with the 'database'.
My code looks something like this:
public static bool CallRoutine(LegacySession session, /* routine params */)
{
    try
    {
        LegacyRoutine routine = session.CreateRoutine(/* routine params */);
        routine.Call();
        // Process result
    }
    catch (LegacyException ex)
    {
        // Perform error handling
    }
}
Were this all my code, I would create interfaces that the LegacySession and LegacyRoutine implement and then write unit tests that use mock implementations of those interfaces using MOQ or something similar. The problem is that I don't have access to the code for LegacyRoutine or LegacySession so I can't make them implement an interface.
Any ideas about how I could do this without changing the production code too much?
If you can't access LegacyRoutine (I'm guessing it's in a referenced DLL), why not just create a wrapper for it, then switch between different implementations:
public interface ILegacyWrapper
{
    ILegacyRoutine CreateRoutine();
    // etc etc
}

public interface ILegacyRoutine
{
    // put members of LegacyRoutine
}
Know what I mean? Just mock everything out into wrappers/interfaces.
Then you could go:
ILegacyRoutine routine = session.CreateRoutine(/* routine params */)
Where session would be declared as an ILegacyWrapper, but implemented with a mock concrete.
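One possible shape for the production-side wrappers, assuming ILegacyRoutine declares a Call() method and the legacy types look like the question's snippet (the adapter class names are made up):

// Production adapter: wraps the real legacy session behind the test-friendly interface.
public class LegacySessionWrapper : ILegacyWrapper
{
    private readonly LegacySession _session;

    public LegacySessionWrapper(LegacySession session)
    {
        _session = session;
    }

    public ILegacyRoutine CreateRoutine(/* routine params */)
    {
        return new LegacyRoutineWrapper(_session.CreateRoutine(/* routine params */));
    }
}

public class LegacyRoutineWrapper : ILegacyRoutine
{
    private readonly LegacyRoutine _routine;

    public LegacyRoutineWrapper(LegacyRoutine routine)
    {
        _routine = routine;
    }

    public void Call()
    {
        _routine.Call(); // delegate straight to the legacy API
    }
}

CallRoutine would then take an ILegacyWrapper instead of a LegacySession, and the tests hand in a Moq (or hand-rolled) fake.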
Also, it goes without saying (but I'll say it anyway): you should consider a DI framework to make your life simpler. Otherwise you'll end up with IFoo foo = new Foo() (hard-coded dependencies) all over the place.
StructureMap is my DI poison of choice.
HTH
You could write a thin wrapper over their API for which you did have an interface. Whether that's a practical thing to do or not rather depends on the size of the API.
Search for C# mock concrete types. Sorry, I have to run, but here's a link to the first thing I found that will solve your problem (there may be better solutions, but this looks OK):
http://docs.typemock.com/isolator/##typemock.chm/Documentation/CreatingFakesWithAAA.html
Also, check out Moq, which I've had great success with in the past
I would advise you to use a dependency injection framework. It helps you make your classes more loosely coupled by breaking external class dependencies out into objects which are injected into your classes. These objects are often represented by an interface, which lets you use different implementations in production and when testing. That way you won't have to actually call the external database when testing. I can recommend Ninject; it makes dependency injection a lot easier than doing it manually.
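As a rough sketch of how the wiring might look with Ninject, reusing the hypothetical LegacySessionWrapper from the earlier answer (a test project would bind the interface to a mock instead):

using Ninject;

public static class CompositionRoot
{
    public static ILegacyWrapper CreateWrapper(LegacySession realSession)
    {
        var kernel = new StandardKernel();

        // Production binding: hand Ninject a wrapper around the real legacy session.
        kernel.Bind<ILegacyWrapper>().ToConstant(new LegacySessionWrapper(realSession));

        // Resolve it here, or let Ninject inject ILegacyWrapper into constructors.
        return kernel.Get<ILegacyWrapper>();
    }
}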
This pattern pops up a lot. It looks like a very verbose way to move what would otherwise be separate named methods into a single method, distinguished by a parameter.
Is there any good reason to have this pattern over just having two methods Method1() and Method2()? The real kicker is that this pattern tends to be invoked only with constants at runtime, i.e. the arguments are all known before compilation is done.
public enum Commands
{
    Method1,
    Method2
}

public void ClientCode()
{
    //Always invoked with constants! Never user input.
    RunCommands(Commands.Method1);
    RunCommands(Commands.Method2);
}

public void RunCommands(Commands currentCommand)
{
    switch (currentCommand)
    {
        case Commands.Method1:
            // Stuff happens
            break;
        case Commands.Method2:
            // Other stuff happens
            break;
        default:
            throw new ArgumentOutOfRangeException("currentCommand");
    }
}
To an OO programmer, this looks horrible.
The switch and enum would need synchronised maintenance and the default case seems like make-work.
The OO programmer would substitute an object with named methods: then names like Method1 would appear only once in the library, and all the default cases would be obviated.
Yes, your clients still need to be synchronised with the methods you supply - a static language always insists on method names being known at compile time.
You could argue that this pattern allows you to put shared logging (or other) code for method entry and exit in a single place. But I wouldn't. AOP is a better approach for this sort of thing.
That pattern could be valid if you needed the coupling to be very loose. For example, you might have an interface:
interface CommandProcessor
{
    void process(Command c);
}
If you have a method per command, then each time you add a new command you would need to add a new method; if you have multiple implementations, you would need to add the method to each processor. This could be resolved by having a base class, but if the needs diverge you could end up with a very deep class hierarchy as you add new abstraction layers (or you may already be extending another class with the processor). If it is based on switches over the constant, you can have a default case that handles new cases appropriately (exceptions, or whatever may be appropriate).
I have used a pattern similar to this in my code with the addition of a factory. The operations started as a small set, but I knew they would be increasing, so I had a mechanism to describe the command and then a factory that produced CommandProcessors. The factory would generate the appropriate processor and then the single method of that processor would accept the command and perform its processing.
That said, if your list of commands is fairly static and you don't need to worry about how tightly things are coupled, the one-method-per-command approach certainly lends itself to much more readable code.
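For what it's worth, here is a rough sketch of the factory arrangement described above; the Command shape and the processor names are invented for illustration.

using System;
using System.Collections.Generic;

public class Command
{
    public string Name { get; set; }
    // ... payload describing the work to do
}

public interface ICommandProcessor
{
    void Process(Command c);
}

public class Method1Processor : ICommandProcessor
{
    public void Process(Command c) { /* stuff happens */ }
}

public class Method2Processor : ICommandProcessor
{
    public void Process(Command c) { /* other stuff happens */ }
}

// Maps a command description to the processor that handles it, so adding a
// command means registering one new processor rather than editing a switch.
public class CommandProcessorFactory
{
    private readonly Dictionary<string, Func<ICommandProcessor>> _registry =
        new Dictionary<string, Func<ICommandProcessor>>
        {
            { "Method1", () => new Method1Processor() },
            { "Method2", () => new Method2Processor() }
        };

    public ICommandProcessor Create(Command c)
    {
        Func<ICommandProcessor> make;
        if (_registry.TryGetValue(c.Name, out make))
        {
            return make();
        }
        throw new ArgumentOutOfRangeException("c", "No processor registered for " + c.Name);
    }
}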
I can't see any obvious advantages. Quite the opposite; by splitting the blocks into separate methods, each method will be smaller, easier to read and easier to test.
If needed, you could still have the same "entry point" method, where each case would just branch out and call another method. Whether that would be a good or bad idea is impossible to say without knowing more about specific cases. Either way, I would definitely avoid implementing the code for each case in the RunCommands method.
If RunCommands is only ever invoked with the names constants, then I don't see any advantage in this pattern at all.
The only advantage I see (and it could be a big one) would be that the decision between Method1 and Method2 and the code that actually executes the choice could be entirely unrelated. Of course, that advantage is lost when only constants are ever used to invoke RunCommands.
If the code being run inside each case block is completely separate, there is no value added. However, if there is any common code to be executed before or after the parameter-specific code, this allows it not to be repeated.
Still not really the best pattern, though. Each separate method could just have calls to helper methods to handle the common code. And if there needs to be another call, but this one doesn't need the common code at the start or the end, the whole model is broken (or you surround that code with an if). At this point, all value is lost.
So, really, the answer is no.
I'm always looking for a way to use all the tools I can and to stretch myself just beyond where I am. But as much as I have read about delegates, I can never find a place to use them (the same goes for interfaces, generics, and a lot of other stuff, but I digress). I was hoping someone could show me when and how they have used a delegate in web programming with ASP.NET and C# (2.0 and above).
Thank you and if this wrong for Stack Overflow, please just let me know.
bdukes is right about events. But you're not limited to just using delegates with events.
Study the classic Observer Pattern for more examples on using delegates. Some text on the pattern points toward an event model, but from a raw learning perspective, you don't have to use events.
One thing to remember: a delegate is just another type that can be used and passed around like your primitive types such as an int. And just like int, a delegate has its own special characteristics that you can act on in your code when you consume the delegate type.
To get a really great handle on the subject and on some of its more advanced and detailed aspects, get Joe Duffy's book, Professional .NET Framework 2.0.
Well, whenever you handle an event, you're using a delegate.
To answer your second question first, I think this is a great question for StackOverflow!
On the first, one example would be sorting. The Sort() method on List<T> takes a delegate to do the sorting, as does the Find() method. I'm not a huge fan of sorting in the database, so I like to use Sort() on my result sets. After all, the order of a list is much more of a UI issue (typically) than a business rule issue.
Edit: I've added my reasons for sorting outside the DB to the appropriate question here.
Edit: The comparison function used in the sort routine is a delegate. Therefore, if you sort a List<T> using the .Sort(Comparison<T>) method, the Comparison<T> method you pass to the sort function is a delegate. See the List<T>.Sort(Comparison<T>) documentation.
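A tiny illustration (the User type is made up): the lambda below is converted to a Comparison<User> delegate that Sort invokes for each pair of elements it needs to order.

using System;
using System.Collections.Generic;

public class User
{
    public string FirstName { get; set; }
}

public static class SortExample
{
    public static void Main()
    {
        var users = new List<User>
        {
            new User { FirstName = "Carol" },
            new User { FirstName = "Alice" },
            new User { FirstName = "Bob" }
        };

        // Comparison<User> delegate supplied as a lambda.
        users.Sort((a, b) => string.Compare(a.FirstName, b.FirstName, StringComparison.Ordinal));

        foreach (var u in users)
        {
            Console.WriteLine(u.FirstName); // Alice, Bob, Carol
        }
    }
}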
Another quick example off the top of my head would be unit testing with Rhino Mocks. A lot of the things you can do with Rhino Mocks utilize delegates and lambda expressions.
You can use delegates whenever you know you will want to take some action, but the details of that action will depend on circumstances.
Among other things, we use delegates for:
Sorting and filtering, especially if the user can choose between different sorting/filtering criteria
Simplifying code. For example, a longish process where the beginning and end are always the same, but a small middle bit varies. Instead of having a hard-to-read if block in the middle, I have one method for the whole process, and pass in a delegate (Action) for the middle bit.
I have a very useful ToString method in my presentation layer which converts a collection of anything into a comma-separated list. The method parameters are an IEnumerable<T> and a Func<T, string> delegate for turning each T in the collection into a string. It works equally well for stringing together Users by their FirstName or for listing Projects by their ID.
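A sketch of what that helper might look like (the method name and exact shape are guesses at what was described):

using System;
using System.Collections.Generic;
using System.Linq;

public static class PresentationHelpers
{
    // Turns any collection into a comma-separated string, letting the caller
    // decide (via the Func<T, string> delegate) how each item is rendered.
    public static string ToCommaSeparated<T>(IEnumerable<T> items, Func<T, string> toText)
    {
        return string.Join(", ", items.Select(toText).ToArray());
    }
}

// Usage: the same helper strings Users together by FirstName or Projects by ID.
// var names = PresentationHelpers.ToCommaSeparated(users, u => u.FirstName);
// var ids   = PresentationHelpers.ToCommaSeparated(projects, p => p.ID.ToString());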
There isn't anything special to asp.net related to delegates (besides considerations when using async stuff, which is a whole different question), so I will point you to other questions instead:
Delegate Usage : Business Applications
Where do I use delegates?
Another example would be to publish events for user controls.
Eg.
// In your user control
public delegate void evtSomething(SomeData oYourData);
public event evtSomething OnSomething;

// In the page using your user control
ucYourUserControl.OnSomething += ucYourUserControl_OnSomething;

// Then implement the function
protected void ucYourUserControl_OnSomething(SomeData oYourData)
{
    ...
}
Recently I used delegates for "delegating" the checking of permissions.
public Func CheckPermission;
This way, the CheckPermission function can be shared by various controls or classes, say in a static class or a utilities class, and still be managed centrally, also avoiding interface explosion; just a thought.
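A minimal sketch of that idea (the original post elides the delegate's type arguments, so the Func<string, bool> signature and the names below are assumptions):

using System;

public static class Permissions
{
    // Shared, centrally managed permission check exposed as a delegate.
    // Deny everything until the application wires in the real rule at startup, e.g.:
    // Permissions.CheckPermission = action => currentUser.HasRight(action);
    public static Func<string, bool> CheckPermission = action => false;
}

public class SomeControl
{
    public void Save()
    {
        if (!Permissions.CheckPermission("Save"))
        {
            return; // not allowed; skip the operation
        }
        // ... perform the save
    }
}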