I read today about C# 4.0 code contracts. It seems the common practice for validating that a parameter to a method isn't null is as follows:
Contract.Requires(p != null);
However, it seems quite unreasonable to me that I'd have to do this for every parameter of every interface method in my code. In the vast majority of cases, parameters are expected not to be null. I'd expect some sort of mechanism for declaring that specific parameters are "allowed" to be null (similar to the "@Nullable" annotation in Java), with the Contracts framework automatically ensuring the rest aren't null.
Besides saving much time on these "boilerplate checks" (as well as on many "contracts classes", since often there simply aren't any conditions to verify other than non-null parameters), it would also make the contracts code cleaner and more "logic-oriented".
My question is: is there any way to do this, and if not, why isn't there one, or possibly why is my approach here wrong?
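For reference, this is roughly the boilerplate being complained about: a contract class whose only job is to say "not null" for each parameter. This is a minimal sketch; IRepository, Save and Load are made-up names.

using System.Diagnostics.Contracts;

[ContractClass(typeof(RepositoryContracts))]
public interface IRepository
{
    void Save(object entity);
    object Load(string key);
}

[ContractClassFor(typeof(IRepository))]
internal abstract class RepositoryContracts : IRepository
{
    public void Save(object entity)
    {
        // The only condition is "not null", yet it needs its own line per parameter.
        Contract.Requires(entity != null);
    }

    public object Load(string key)
    {
        Contract.Requires(key != null);
        return default(object);
    }
}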
I don't agree; null is very helpful when you need to check whether something hasn't been initialized yet, or whether data was not found, and sometimes you'll want to pass null to a method and that's fine. Code contracts are good for common methods that serve lots of classes, and for API definitions. If you write in a layered architecture, you just need to protect the interactions between the layers, and you are null-safe inside each layer.
Your domain has nulls, and that's OK.
I've recently joined a new project, and our data access layer has a utility method that takes a list of objects and throws if any of them are null, which is normally called at the top of the method. It's pretty handy, except that ReSharper has no idea what it does and thus shows a bunch of "Possible NullReferenceException" warnings in the methods that use it. Is there any way to configure it to know that this method ensures that the objects passed to it aren't null?
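For illustration, the helper in question looks something like this (a sketch only; the name ThrowIfAnyNull, the Guard class, and the exact shape are assumptions):

using System;

public static class Guard
{
    // Throws if any of the supplied arguments is null; typically called at the
    // top of a data access method.
    public static void ThrowIfAnyNull(params object[] args)
    {
        foreach (var arg in args)
        {
            if (arg == null)
                throw new ArgumentNullException();
        }
    }
}

// Usage: Guard.ThrowIfAnyNull(connection, query, parameters);
// ReSharper still warns about later dereferences of connection, query, etc.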
I've got a simple factory, built in C#, that instantiates and configures validators that are built in ASP.NET and JavaScript. I want a way to test whether I'm accidentally trying to set a validator twice on the same Control (for example, having two RequiredValueValidators is not a great idea and could cause UI/UX problems), but I also want to make sure that validators that use the same building mechanisms in different ways are preserved (such as two RegularExpressionValidators that use different REs, but not two that use the same RE).
I've tried a few different possible techniques, which I'll detail as answers below, but in essence I need a technique to pass a description of how to compare two validators of the same base type to discern whether they are equal (N.B. 'equal' is NOT 'identical': they could have different IDs, etc., but still do the same job) that's interpretable at runtime and accessible to other areas of my C# .dll to actually run the check.
My answers will be community wiki, with the intent that errors/pitfalls I fell into will be edited out/corrected/discussed by the community rather than merely downvoted for being initially incorrect, so that others won't suffer the same fate.
One attempt was to attach a predicate as an attribute on the factory method that builds the validator. This would then be retrieved via reflection somewhere else and used to compare two potential validators.
A major flaw is that you cannot use predicates (or delegates, for that matter) as attribute arguments.
A possible work-around is to give each validator an individual property (containing the predicate delegate or an IEquatable<> implementation) and then retrieve that; however, there are a lot of different things to consider when comparing validators (what type, configuration, whether it relies on other controls, etc.), so unless you can create a base class or interface that can deal with different types of IEquatable<ValidatorType>, this is also impossible...
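One way around the "no delegates in attributes" restriction is to point the attribute at a comparer type rather than at a delegate and instantiate it with reflection; here is a rough sketch under that assumption (ValidatorComparerAttribute is a made-up name), which still leaves the base-class/interface problem above unsolved:

using System;
using System.Collections;

// Attributes can't carry delegates, but they can carry a Type.
[AttributeUsage(AttributeTargets.Method)]
public sealed class ValidatorComparerAttribute : Attribute
{
    public Type ComparerType { get; private set; }

    public ValidatorComparerAttribute(Type comparerType)
    {
        ComparerType = comparerType;
    }
}

// Elsewhere, retrieved via reflection and used to compare two candidate validators:
// var attr = (ValidatorComparerAttribute)Attribute.GetCustomAttribute(
//     factoryMethod, typeof(ValidatorComparerAttribute));
// var comparer = (IEqualityComparer)Activator.CreateInstance(attr.ComparerType);
// bool duplicate = comparer.Equals(existingValidator, candidateValidator);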
I've also tried creating a small, static switch-case method within my factory that simply outputs another small configuration class built by the switch case. In essence this is much simpler than the previous approach, but it's not without its problems. For example, I cannot define my return and parameter types correctly so that a RegularExpressionValidator check can happen within the same code block as a ValidDateValidator check.
I would like a way to get warnings when an object reference could potentially throw a Null Reference Exception, so that I can write defensive code for these.
I have looked at Resharper, but didn't see anything there that accomplishes this.
Code Contracts is probably a non-starter; the application is quite large, and it's written in .NET 3.5, before Code Contracts became officially available.
Resharper does in fact accomplish something like this. Possible NullReferenceExceptions are highlighted in the IDE in blue, with tooltips when you hover over them.
Resharper then keeps track of potential errors and warnings in its own inspection results window (separate from Visual Studio's compiler errors and warnings).
Generally speaking, unless you specifically initialized an object, it can always have the potential to throw a NullReferenceException, at least as far as the compiler is concerned.
In order for an algorithm to check whether a reference to the object can potentially be null, it would have to traverse every possible path that your program can take, including paths in any external libraries you may be using. Even for the simplest of programs, such an algorithm would kill the performance of your compiler.
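A small illustration of the kind of code such an inspection flags, and the defensive version (the types and values here are placeholders):

using System;

class Customer { public string Name; }

class Example
{
    // May legitimately return null when nothing matches.
    static Customer FindCustomer(int id)
    {
        return id == 42 ? new Customer { Name = "Alice" } : null;
    }

    static void Main()
    {
        Customer customer = FindCustomer(7);

        // An inspection tool would flag "customer.Name" as a possible
        // NullReferenceException here, because FindCustomer can return null.
        // Defensive version:
        if (customer != null)
        {
            Console.WriteLine(customer.Name);
        }
    }
}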
I'm against the idea of blindly defending against null for every field in the code and inside every method.
The following help me decide where to check for null values:
1- Who will be invoking your methods?
If a method is private and you have control over how it's being accessed, I don't see how it makes sense to protect against null unless expecting null values is part of the method's logic.
If a method is exposed to the public (such as an API), then of course null checks should be a huge concern.
2- Software Design:
Imagine you are calling method1(fromAnimalToString(animal)); and for some reason fromAnimalToString() never returns null (though it might return an empty string instead).
In that case, it wouldn't make sense to null-check that argument in method1()'s body (see the sketch after point 3).
3- Testing:
In software engineering, it's almost impossible to test all possible scenarios that can ever execute. However, test normal and alternative scenarios and make sure the flow is as expected.
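To make points 1 and 2 concrete, here is a minimal sketch following the example above (the type and method names are placeholders):

using System;

public class Animal { public string Name; }

public static class Example
{
    // Public API surface: guard against null explicitly (point 1).
    public static string FromAnimalToString(Animal animal)
    {
        if (animal == null)
            throw new ArgumentNullException("animal");

        // Never returns null; at worst an empty string (point 2).
        return animal.Name ?? string.Empty;
    }

    // Because FromAnimalToString never returns null, there is no need to
    // re-check the argument when Method1 is only called as
    // Method1(FromAnimalToString(animal)).
    private static void Method1(string description)
    {
        Console.WriteLine(description.Length);
    }

    public static void Main()
    {
        Method1(FromAnimalToString(new Animal { Name = "Rex" }));
    }
}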
I'm designing some services and I would like to get some feedback about the conventions I'm using.
For all operations, I always define a 'Context' object and a 'Result' one, because of the following advantages:
extensibility: I can add parameters to the context or objects to the result without changing the interface
compactness: I only have a single object in the definition, even if I need many parameters
Example:
[OperationContract]
DoSomethingResult DoSomething(DoSomethingContext context)
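For completeness, the accompanying objects might look roughly like this (a sketch; the property names are placeholders):

using System.Runtime.Serialization;

[DataContract]
public class DoSomethingContext
{
    [DataMember]
    public string CustomerId { get; set; }

    // New parameters become additional [DataMember] properties;
    // the operation signature never changes.
}

[DataContract]
public class DoSomethingResult
{
    [DataMember]
    public bool Success { get; set; }
}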
Anyway, I'm not really sure that this is the best way to do it because of the following reasons:
overhead: I always wrap the response properties into an object. Sometimes, the Result object has no properties
versioning: WCF has built-in versioning for contracts, and maybe it would be better to publish a new contract version to signal a change rather than silently extending these objects
In fact I use the same technique with normal methods too, so it would be important for me to get some feedback, advice, criticism, and so on.
Thank you
I think that's a perfectly legitimate way to write your contracts. I've worked on a number of projects with this sort of contract and it has been a pleasure - very easy during development (just add a property to the object and you're done), a straightforward and clear pattern that applies to all services, and it allows for things like a single validation method for all operations.
In response to your concerns:
I don't think the overhead of creating an empty object is at all significant. Don't worry about this unless it becomes an issue.
If the Result object has no properties (i.e. you aren't returning anything) then simply return void. You aren't gaining anything by returning an empty object.
You can (and probably should) version these objects as you version your contracts. What you are doing in no way precludes you from versioning your objects.
Please note that versioning objects does not mean renaming them to DoSomethingResult_v1, DoSomethingResult_v2, as I've seen done before. You should version with namespaces; it makes things clearer and cleaner. Just put a version in the XML namespaces on the service contract and data contract attributes.
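For example, namespace versioning might look like this (a sketch; the URI and interface name are placeholders):

using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract(Namespace = "http://schemas.example.com/orders/2011/06")]
public interface IOrderService
{
    [OperationContract]
    DoSomethingResult DoSomething(DoSomethingContext context);
}

[DataContract(Namespace = "http://schemas.example.com/orders/2011/06")]
public class DoSomethingContext { /* ... */ }

[DataContract(Namespace = "http://schemas.example.com/orders/2011/06")]
public class DoSomethingResult { /* ... */ }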
I don't think there are any performance concerns here, and the code looks easy to work with from the code owner's perspective.
My big concern is that it isn't at all clear, from the consumer's perspective, how your service works. They would have to rely on separate documentation or error messages.
It would be much easier for someone unfamiliar with your code (i.e. someone who has just downloaded the WSDL) to consume your service if the parameters it required were declared. You also get a good degree of validation out of the box.
To illustrate:
[OperationContract]
DoSomethingResult DoSomething(DoSomethingContext context)
vs
[OperationContract]
[FaultContract(typeof(CustomerNotFoundFault))]
Customer GetCustomer(UInt32 customerId)
This point is mostly relevant to the design of APIs. Where this isn't so relevant, is where you are both the author and the consumer of the service.
I totally support Kirk Broadhurst's suggestion of using namespaces for versioning. I use that and it works well.
EDIT: on a second reading, I think I misread your post. I was assuming here that your parameter and return value objects were some generic object that you use across all services. If indeed they are specific to each service, then that's a great approach which I've used successfully on many occasions. You'll do well with it.
This is mostly a request for comments if there is a reason I should not go down this road.
I have a multi-tiered, CodeSmith-generated application. At the UI level there need to be some required fields, and the required fields will vary depending on field values in the bound entity. What I am thinking of doing is adding a "PropertyRequired" custom attribute to each property in the entities, which I can set to true or false when I load the entity in its manager. Then I will use reflection to query the property and give visual feedback to the user at the UI level, and I can validate that all the required properties have a valid value in the manager before I save.
I've worked this out as a proof of concept with one property in one entity, but before I try to extend it to the rest of the application I'd like to ask someone with more experience to either tell me to go for it, or explain why I won't like it when I scale up. If this is a bad idea, or if you can suggest a better approach, please offer your opinion.
It is a pretty reasonable way to do it (I've done something very similar before) - but there are always downsides:
any code needing the entity will need the extra reference (assuming that the attribute and entity are in different assemblies)
the values (unless you are clever about it) must be determined at compile-time
you can't use it on entities outside of your control
In most cases the above aren't a problem. If they are an issue, you might want to support an external metadata model - but unless you need it, this would be overkill. Don't do it unless you must (meaning: go ahead and use attributes; they are usually fine).
There is no inherent reason to avoid custom attributes. They are a supported CLR feature and the backbone of many available products (Code Contracts, FxCop, etc.).
This is not an unreasonable approach and healthier than baking this stuff into a UI tier. There are a couple of points worth considering before taking the full dive:
You are tightly coupling business logic with the business entity itself. Are there circumstances where a field being required, or its valid values, could change? You may be limiting yourself or end up with an inconsistent validation mechanism.
Dynamic assignment is possible but trickier - i.e. once you set a field to be required, that's what it will be unless you override it.
Custom attributes can be quite inflexible if further down the line you want to do something more complicated - namely, if you need to pass state into an attribute-driven validation scheme. Attributes lend themselves to declarative assignment. Only having a true/false required property shouldn't be an issue here, though.
Just playing devil's advocate really; in general, for a fairly simple application where you only care about required fields, this is quite a tidy way of doing it.