Iterate through a method's parameters for validation purposes - C#

I've been thinking that it would be useful to be able to do such a thing, for example, to check the parameters for null references and throw an exception if any are found.
This would save some typing and also would make it impossible to forget to add a check if a new parameter is added.

Well, not unless you count:
public void Foo(string x, object y, Stream z, int a)
{
    CheckNotNull(x, y, z);
    ...
}

public static void CheckNotNull(params object[] values)
{
    foreach (object x in values)
    {
        if (x == null)
        {
            throw new ArgumentNullException();
        }
    }
}
To avoid the array creation hit, you could have a number of overloads for different numbers of arguments:
public static void CheckNotNull(object x, object y)
{
    if (x == null || y == null)
    {
        throw new ArgumentNullException();
    }
}
// etc
An alternative would be to use attributes to declare that parameters shouldn't be null, and get PostSharp to generate the appropriate checks:
public void Foo([NotNull] string x, [NotNull] object y,
                [NotNull] Stream z, int a)
{
    // PostSharp would inject the code here.
}
Admittedly I'd probably want PostSharp to convert it into Code Contracts calls, but I don't know how well the two play together. Maybe one day we'll be able to write the Spec#-like:
public void Foo(string! x, object! y, Stream! z, int a)
{
    // Compiler would generate Code Contracts calls here
}
... but not in the near future :)

You can define a method parameter with the params keyword. This makes it possible to pass a variable number of arguments to your method. You can then iterate over them and check for null references or whatever else you want.
public void MyMethod(params object[] parameters) {
    foreach (var p in parameters) {
        ...
    }
}
// method call:
this.MyMethod("foo", "bar", 123, null, new MyClass());
In my opinion, however, it's not a good way of doing things. You will have to manually control the type of your parameters and their position in the input array, and you won't be able to use IntelliSense for them in Visual Studio.

I had the same question some time ago, but wanted to do so for logging purposes. I never found a good solution, but found this link regarding an AOP-based approach for logging method entries and exits. The gist of it is that you need to use a framework that can read your class and inject code at runtime to do what you're trying to do. Doesn't sound easy.
How do I intercept a method call in C#?

It is possible to iterate over the parameters that have been declared for a method (via reflection), but I see no way to retrieve the values of those parameters for a specific method call ...
Perhaps you could use PostSharp and create an aspect in which you perform this check. Then, at compile time, PostSharp can 'weave' the aspect (inject additional code) into every method that you've written ...

You can look into the Validation Application Block, which can be treated as an example of automated validation, and the Unity Application Block (in particular its interception feature) for intercepting calls and inspecting parameters.

You could quite probably automate parameter checking with an AOP library such as PostSharp.

After providing a trivial example of a method using the params keyword, RaYell says:
In my opinion, however, it's not a good way of doing things. You will have to manually control the type of your parameters and their position in the input array, and you won't be able to use IntelliSense for them in Visual Studio.
I would certainly agree that declaring a method that takes two strings, one int, an object that can be null, and another MyClass object using params is a bad idea. However, there are (in my opinion) perfectly valid and appropriate applications of the params keyword, namely when the parameters are all of the same type. For example:
public T Max<T>(T x, T y) where T : IComparable<T> {
    return x.CompareTo(y) > 0 ? x : y;
}

public T Max<T>(params T[] values) where T : IComparable<T> {
    T maxValue = values[0];
    for (int i = 1; i < values.Length; i++) {
        maxValue = Max<T>(maxValue, values[i]);
    }
    return maxValue;
}

var mandatoryFields = (new object[] { input1, input2 }).Where(v => v == null || v.ToString().Trim() == "");
This gives you the parameters that are null or blank.
So then you can just add:
if (mandatoryFields.Any()) { }
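A fuller sketch of how this might be used as a guard (the method name is hypothetical; requires using System.Linq):
void Validate(string input1, string input2)
{
    var mandatoryFields = new object[] { input1, input2 }
        .Where(v => v == null || v.ToString().Trim() == "");

    if (mandatoryFields.Any())
        throw new ArgumentException("One or more mandatory fields are missing or blank.");
}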


How to abstract operation from unrelated types?

REMARK, after rethinking my issue:
Though the best solution for my exact example with the WriteData method was proposed by #Dogu Arslan, the issue in the question's title was actually solved by #InBetween and #Fabio. That is, for my exact WriteData example it is better to move the conversion logic out of the method, but to abstract the logic from unrelated types it is better to use the proposed overloading. So all three answers are helpful to me.
I have already read the following similar questions - 1, 2, and others - but have not come up with a suitable solution for my case.
My case is this: I have the following simple method. The important part is the line with the comment and the "source" parameter. The rest is not so important and serves only to show that there is an "operation" from the question's title.
void WriteData(
    int count,
    int currentIndex,
    List<byte> source,
    StreamWriter sw,
    string fileName)
{
    var countToWrite = count - currentIndex;
    if (countToWrite == 0)
        return;
    if (sw == null)
        sw = new StreamWriter(GetFullPath(fileName));
    //---------- Source's elements must be converted to string.
    var dataToWrite =
        source.GetRange(currentIndex, countToWrite)
              .Select(x => Convert.ToString(x));
    StringBuilder sb = new StringBuilder();
    foreach (var item in dataToWrite)
        sb.AppendLine(item);
    sw.Write(sb.ToString());
}
For now I want the "source" parameter to be a list of bytes, doubles, or strings. Do I have to write three copies of the WriteData method with only a single change (the type of the list in "source")? Or is there a better approach?
I tried a type constraint, but which type would I constrain to?
I tried checking the type and throwing an exception if it is not in my type list (byte, double, string). But exceptions only work at run time, and I want this to work at compile time.
For now I have to restrict myself to run-time type checking as a temporary solution, but, as I mentioned before, it is not suitable in the long run.
Why not use overloaded methods with one "hidden" actual implementation:
public void WriteData(List<byte> source) { WriteData<byte>(source); }
public void WriteData(List<double> source) { WriteData<double>(source); }
public void WriteData(List<string> source) { WriteData<string>(source); }

private void WriteData<T>(List<T> source)
{
    // Your actual implementation
}
In the private method you can check types and throw an exception when the wrong type is given, in case you want to "protect" against the private method being made public by "mistake".
The best solution is to publicly expose the needed overloads and then delegate implementation to a private generic method:
public void WriteData( , , List<string> source, , ) { WriteData<string>(...); }
public void WriteData( , , List<byte> source, , ) { WriteData<byte>(...); }
public void WriteData( , , List<double> source, , ) { WriteData<double>(...); }
private void WriteData<T>( , , List<T> source, , )
{
    Debug.Assert(typeof(T).Equals(typeof(string)) ||
                 typeof(T).Equals(typeof(byte)) ||
                 typeof(T).Equals(typeof(double)));
    ...
}
One thing you could do is to move the string conversion responsibility to another class altogether, i.e. a StringConverter class, which would have overloaded constructors taking these different input types, convert them to string, and provide a method that returns the string. That way, even if you later extend the types you support, you would not need to change the method in your example; all of that would be transparent behind your string converter class. The checks will also be at compile time, because the string converter class takes its input through its constructors. Yes, this means that every time you want to call the WriteData method you would need to instantiate a new StringConverter object, but C# does not provide type constraints at the level you ask for off the shelf.
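A rough sketch of that idea; the StringConverter name and its members are illustrative only (requires using System, System.Collections.Generic and System.Linq):
class StringConverter
{
    private readonly List<string> converted;

    // One constructor per supported element type; adding a new type later
    // only touches this class, not WriteData.
    public StringConverter(List<byte> source)   { converted = source.Select(x => Convert.ToString(x)).ToList(); }
    public StringConverter(List<double> source) { converted = source.Select(x => Convert.ToString(x)).ToList(); }
    public StringConverter(List<string> source) { converted = new List<string>(source); }

    public IList<string> GetStrings() { return converted; }
}
// WriteData would then take a StringConverter (or the IList<string> it produces)
// instead of a List<byte>/List<double>/List<string>.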

if (x is A) { // use x as A

In C# I sometimes have to do something if the object is of some type.
e.g.,
if (x is A)
{
    // do stuff but have to cast using (x as A)
}
What would be really nice is if, inside the if block, we could just use x as if it were an A, since it can't be anything else!
e.g.,
if (x is A)
{
    (x as A).foo(); // redundant and verbose
    x.foo();        // A contains a method called foo
}
Is the compiler just not smart enough to know this, or are there any possible tricks to get similar behavior?
Can D (the Dlang) effectively do something similar?
BTW, I'm not looking for dynamic. Just trying to write less verbose code. Obviously I can do var y = x as A; and use y instead of x.
In D, the pattern you'd usually do is:
if(auto a = cast(A) your_obj) {
    // use a in here, it is now of type A
    // and will correctly check the class type
}
For one statement (or chain-able calls) you can use (x as A)?.Foo() in C# 6.0+ as shown in Is there an "opposite" to the null coalescing operator? (…in any language?).
There is no multiple-statement version in the C# language, so if you want one you'll need to write your own, e.g. using an Action for the body of the if statement:
void IfIsType<T>(object x, Action<T> action)
{
    if (x is T)
        action((T)x);
}

object s = "aaa";
IfIsType<string>(s, x => Console.WriteLine(x.IndexOf("a")));
I believe this feature is under consideration for C# 7. See the Roslyn issue for documentation, specifically 5.1 Type Pattern:
The type pattern is useful for performing runtime type tests of reference types, and replaces the idiom
var v = expr as Type;
if (v != null) { /* code using v */ }
With the slightly more concise
if (expr is Type v) { /* code using v */ }
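Applied to the question's example, the C# 7 type pattern would look like this:
if (x is A a)
{
    a.foo(); // 'a' is strongly typed as A inside the block
}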
As for Dlang, I would reference their if statement syntax documentation here.

Is there a technical reason for requiring the "out" and "ref" keywords at the caller?

When calling a method with a ref or out parameter, you have to specify the appropriate keyword when you call the method. I understand that from a style and code quality standpoint (as explained here for example), but I'm curious whether there is also a technical need for the keywords to be specified in the caller.
For example:
static void Main()
{
    int y = 0;
    Increment(ref y); // Is there any technical reason to include ref here?
}

static void Increment(ref int x)
{
    x++;
}
The only technical reason I could think of is overload resolution: you could have
static void Increment(ref int x)
and also
static void Increment(int x)
This is allowed; without ref in the call, the compiler wouldn't be able to tell them apart.
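For example, with both of those overloads in scope, the call site is what picks the one you mean:
int y = 0;
Increment(y);      // binds to Increment(int x); the caller's y cannot change
Increment(ref y);  // binds to Increment(ref int x); y is incremented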
If you're asking whether the language could have been designed so that those aren't needed at the call site, the answer is yes. There is no particular reason they could not have been left out. The compiler has all the information it needs from the metadata, so it could make the proper transformation.
That said, doing so would have made it impossible to have these two overloads:
public void DoSomething(int x);
public void DoSomething(ref int x);
The compiler wouldn't be able to disambiguate that.
Although the ref and out could have been made optional, in which case those overloads would be allowed. And the compiler could either take the default (i.e. the non-ref), or issue an ambiguity error and make you specify which one you really wanted.
All that said, I like having to specify ref and out at the call site. It tells me that the parameter could potentially be modified. Having worked in Pascal for many years, where a var parameter is passed the same way that a value parameter is passed (the syntax at the call site is the same), I much prefer the specificity of C# in that regard.
Another reason to require the modifier is to make changing a parameter to a ref or out a breaking change. If the ref/out could be inferred, then an evil programmer changing a parameter from by-value to by-reference would not be detected by clients compiling against the new signature. If a client called the method
public int Increment(int x)
{
    return x + 1;
}
by using
int result = Increment(x);
Suppose an evil developer decided to change the implementation to instead change the value passed by reference and return an error code if, say, the increment resulted in an overflow:
public int Increment(ref int x)
{
    x = x + 1;
    if (x == int.MinValue) // overflow
        return -1;
    else
        return 0;
}
Then a client building against the signature would not receive a compile error, but it would almost certainly break the calling app.
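For contrast, in C# as it actually is, the modifier requirement turns that change into a compile-time break for the caller (a sketch reusing the example above):
int x = 5;
int result = Increment(x);      // no longer compiles once the parameter becomes 'ref int x':
                                // the argument must now be passed with the 'ref' keyword
int result2 = Increment(ref x); // the caller is forced to acknowledge the new semantics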
The compiler needs to know how each argument is being passed when it builds the IL for the call.
One reason is having overloads like:
public void Foo(ref int x) {}
public void Foo(int x) {}
Another reason: out explicitly tells the caller that the method will assign a value it can use after the call, while ref hands the method a reference to the caller's variable, so any new value the method assigns ends up in that same storage location.
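A small sketch of that difference (the method names are made up for illustration):
static void GetAnswer(out int result)
{
    result = 42;          // 'out': the method must assign the parameter before returning
}

static void AddOne(ref int value)
{
    value = value + 1;    // 'ref': the method reads and writes the caller's own variable
}

static void Demo()
{
    int a;                // may be left unassigned when passed as 'out'
    GetAnswer(out a);     // a == 42

    int b = 10;           // must be definitely assigned before being passed as 'ref'
    AddOne(ref b);        // b == 11
}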

method overloading vs optional parameter in C# 4.0 [duplicate]

This question already has answers here:
Should you declare methods using overloads or optional parameters in C# 4.0?
(13 answers)
Closed 9 years ago.
Which one is better? At a glance, optional parameters seem better (less code, less XML documentation, etc.), but why do most MSDN library classes use overloading instead of optional parameters?
Is there anything special you have to take note of when you choose to use optional parameters (or overloading)?
One good use case for 'optional parameters' in conjunction with 'named parameters' in C# 4.0 is that they present us with an elegant alternative to method overloading, where you overload a method based on the number of parameters.
For example, say you want a method foo to be called/used like so: foo(), foo(1), foo(1,2), foo(1,2, "hello"). With method overloading you would implement the solution like this:
/// Base foo method
public void DoFoo(int a, long b, string c)
{
    // Do something
}

/// Foo with 2 params only
public void DoFoo(int a, long b)
{
    /// ....
    DoFoo(a, b, "Hello");
}

public void DoFoo(int a)
{
    ///....
    DoFoo(a, 23, "Hello");
}
.....
With optional parameters in C# 4.0 you would implement the use case like the following,
public void DoFoo(int a = 10, long b = 23, string c = "Hello")
Then you could use the method like so - Note the use of named parameter -
DoFoo(c:"Hello There, John Doe")
This call takes the default value 10 for parameter a and 23 for parameter b.
Another variant of this call - notice that you don't need to pass the values in the order in which the parameters appear in the method signature; the named parameters make the values explicit:
DoFoo(c:"hello again", a:100)
Another benefit of using named parameters is that they greatly enhance readability, and thus the maintainability, of optional-parameter methods.
Note how one method pretty much makes it redundant to define three or more overloads. This, I have found, is a good use case for optional parameters in conjunction with named parameters.
Optional parameters cause issues when you expose them publicly as an API. Renaming a parameter can lead to issues. Changing the default value leads to issues (see e.g. here for some info: Caveats of C# 4.0 optional parameters).
Also, the default values of optional parameters can only be compile-time constants. Compare this:
public static void Foo(IEnumerable<string> items = new List<string>()) {}
// Default parameter value for 'items' must be a compile-time constant
to this
public static void Foo() { Foo(new List<string>());}
public static void Foo(IEnumerable<string> items) {}
//all good
Update
Here's some additional reading material on a case where a constructor with default parameters does not play nicely with reflection.
I believe they serve different purposes. Optional parameters are for when you can use a default value for a parameter, and the underlying code will be the same:
public CreditScore CheckCredit(
    bool useHistoricalData = false,
    bool useStrongHeuristics = true) {
    // ...
}
Method overloads are for when you have mutually-exclusive (subsets of) parameters. That normally means that you need to preprocess some parameters, or that you have different code altogether for the different "versions" of your method (note that even in this case, some parameters can be shared, that's why I mentioned "subsets" above):
public void SendSurvey(IList<Customer> customers, int surveyKey) {
    // will loop and call the other one
}

public void SendSurvey(Customer customer, int surveyKey) {
    ...
}
(I wrote about this some time ago here)
This one almost goes without saying, but:
Not all languages support optional parameters. If you want your libraries to be friendly to those languages, you have to use overloads.
Granted, this isn't even an issue for most shops. But you can bet it's why Microsoft doesn't use optional parameters in the Base Class Library.
Neither is definitively "better" than the other. They both have their place in writing good code. Optional parameters should be used if the parameters can have a default value. Method overloading should be used when the difference in signature goes beyond not defining parameters that could have default values (such as that the behavior differs depending on which parameters are passed, and which are left to the default).
// this is a good candidate for optional parameters
public void DoSomething(int requiredThing, int nextThing = 12, int lastThing = 0)

// this is not, because it should be one or the other, but not both
public void DoSomething(Stream streamData = null, string stringData = null)

// these are good candidates for overloading
public void DoSomething(Stream data)
public void DoSomething(string data)

// these are no longer good candidates for overloading
public void DoSomething(int firstThing)
{
    DoSomething(firstThing, 12);
}
public void DoSomething(int firstThing, int nextThing)
{
    DoSomething(firstThing, nextThing, 0);
}
public void DoSomething(int firstThing, int nextThing, int lastThing)
{
    ...
}
Optional parameters have to be last. So you cannot add an extra parameter to that method unless it's also optional. For example:
void MyMethod(int value, int otherValue = 0);
If you want to add a new parameter to this method without overloading, it has to be optional, like this:
void MyMethod(int value, int otherValue = 0, int newParam = 0);
If it can't be optional, then you have to use overloading and remove the optional value for 'otherValue'. Like this:
void MyMethod(int value, int otherValue = 0);
void MyMethod(int value, int otherValue, int newParam);
I assume that you want to keep the ordering of the parameters the same.
So using optional parameters reduces the number of methods you need to have in your class, but is limited in that they need to be last.
Update
When calling methods with optional parameters, you can use named parameters like this:
void MyMethod(int value, int otherValue = 0, int newValue = 0);
MyMethod(10, newValue: 10); // Here I omitted the otherValue parameter that defaults to 0
So optional parameters gives the caller more possibilities.
One last thing. If you use method overloading with one implementation, like this:
void MyMethod(int value, int otherValue)
{
    // Do the work
}

void MyMethod(int value)
{
    MyMethod(value, 0); // Do the defaulting by method overloading
}
Then a call to 'MyMethod' like this:
MyMethod(100);
will result in two method calls. But if you use optional parameters there is only one implementation of 'MyMethod' and hence only one method call.
What about a 3rd option: pass an instance of a class with properties corresponding to various "optional parameters".
This provides the same benefit as named and optional parameters, but I feel that this is often much clearer. It gives you an opportunity to logically group parameters if necessary (i.e. with composition) and encapsulate some basic validation as well.
Also, if you expect clients that consume your methods to do any kind of metaprogramming (such as building linq expressions involving your methods), I think that keeping the method signature simple has its advantages.
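As a sketch of that third option, reusing the CheckCredit example from earlier (the options class and its members are illustrative only):
public class CreditCheckOptions
{
    public CreditCheckOptions()
    {
        // defaults live here instead of in the method signature
        UseHistoricalData = false;
        UseStrongHeuristics = true;
    }

    public bool UseHistoricalData { get; set; }
    public bool UseStrongHeuristics { get; set; }
}

public CreditScore CheckCredit(CreditCheckOptions options)
{
    // validate/normalize the options here, then do the work
    ...
}

// The caller sets only what it cares about:
var score = CheckCredit(new CreditCheckOptions { UseStrongHeuristics = false });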
A good place to use optional parameters is WCF, since it does not support method overloading.
This is not really an answer to the original question, but rather a comment on #NileshGule's answer, but:
a) I don't have enough reputation points to comment
b) Multiple lines of code are quite hard to read in comments
Nilesh Gule wrote:
One benefit of using optional parameters is that you need not have to do a conditional check in your methods like if a string was null or empty if one of the input parameter was a string. As there would be a default value assigned to the optional parameter, the defensive coding will be reduced to a great extent.
This is actually incorrect, you still have to check for nulls:
void DoSomething(string value = "") // Unfortunately string.Empty is not a compile-time constant and cannot be used as default value
{
    if (value == null)
        throw new ArgumentNullException();
}

DoSomething();     // OK, will use default value of ""
DoSomething(null); // Will throw
If you supply a null string reference, it will not be replaced by the default value. So you still need to check the input parameters for nulls.
To address your first question,
why do most MSDN library classes use overloading instead of optional parameters?
It is for backward compatibility.
When you open a C# 2, 3.0 or 3.5 project in VS2010, it is automatically upgraded.
Just imagine the inconvenience it would create if each of the overloads used in the project had to be converted to match the corresponding optional parameter declaration.
Besides, as the saying goes, "why fix what is not broken?". It is not necessary to replace overloads that already work with new implementations.
One benefit of using optional parameters is that you need not have to do a conditional check in your methods like if a string was null or empty if one of the input parameter was a string. As there would be a default value assigned to the optional parameter, the defensive coding will be reduced to a great extent.
Named parameters give the flexibility of passing parameter values in any order.

Using IComparer<> with delegate function to search

This feels like a question that should be easy enough to find with Google; I think/hope I've just got stuck in the details when trying to implement my own version of it. What I'm trying to do is sort a list of MyClass objects; depending on my Datatype object, different compare functions should be used.
I've had something like this in mind for the class Datatype:
class Datatype {
    public delegate int CMPFN(object x, object y);
    private CMPFN compareFunction;

    (...)

    private Datatype((...), CMPFN compareFunction) {
        (...)
        this.compareFunction = compareFunction;
    }

    public CMPFN GetCompareFunction() {
        return this.compareFunction;
    }

    static private int SortStrings(object a, object b) {
        return ((MyClass)a).GetValue().CompareTo(((MyClass)b).GetValue());
    }
}
And later on I'm trying to sort a MyClass list something like this:
List<MyClass> elements = GetElements();
Datatype datatype = new Datatype((...), Datatype.SortStrings);
elements.Sort(datatype.GetCompareFunction()); // <-- Compile error!
I'm not overly excited about the cast in Datatype.SortStrings, but it feels like this could work(?). The compiler, however, disagrees and gives me the following error on the last line above, and I'm a bit unsure exactly why CMPFN can't be converted/cast to IComparer<MyClass>.
Cannot convert type 'proj.Datatype.CMPFN' to 'System.Collections.Generic.IComparer<proj.MyClass>'
Delegates aren't duck-typed like that. You can create a Comparison<MyClass> from a CMPFN, but you can't use a plain reference conversion - either implicit or explicit.
Three options:
Create the comparer like this:
elements.Sort(new Comparison<MyClass>(datatype.GetCompareFunction()));
Use a lambda expression to create a Comparison<T> and use that instead:
elements.Sort((x, y) => datatype.GetCompareFunction()(x, y));
Write an implementation of IComparer<MyClass> which performs the comparison based on a CMPFN
Note that the second approach will call GetCompareFunction once per comparison.
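If you do take the lambda route, a small tweak avoids calling GetCompareFunction on every comparison:
var compare = datatype.GetCompareFunction(); // look the delegate up once
elements.Sort((x, y) => compare(x, y));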
A much better solution would be to get rid of CMPFN entirely - why not just use (or implement) IComparer<MyClass> to start with? Note that that would remove the casts as well. (If you're happy using delegates instead of interfaces, you could express the comparison as a Comparison<MyClass> instead.)
Note that as of .NET 4.5, you can use Comparer.Create to create a Comparer<T> from a Comparison<T> delegate.
I'm not sure why your current API is in terms of object, but you should be aware that in C# 3 and earlier (or C# 4 targeting .NET 3.5 and earlier) you wouldn't be able to convert an IComparer<object> into an IComparer<MyClass> (via a reference conversion, anyway). As of C# 4 you can, due to generic contravariance.
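For example, on .NET 4.5 or later, and assuming MyClass exposes GetValue() as in the question's SortStrings:
elements.Sort(Comparer<MyClass>.Create(
    (x, y) => x.GetValue().CompareTo(y.GetValue())));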
There are a number of overloads of List<T>.Sort, but there are none which take a delegate with the parameters you have defined (two objects).
However, there is an overload that takes a Comparison<T> delegate, which will work with your code after a few minor modifications. Basically, you just replace your CMPFN delegate with Comparison<MyClass> - as an added bonus, you get strong typing in your SortStrings function, too:
static private int SortStrings(MyClass a, MyClass b) {
    return a.GetValue().CompareTo(b.GetValue());
}

public Comparison<MyClass> GetCompareFunction() {
    return SortStrings; // or whatever
}
...
elements.Sort(datatype.GetCompareFunction());
Try something like this
class AttributeSort : IComparer<AttributeClass>
{
    #region IComparer Members
    public int Compare(AttributeClass x, AttributeClass y)
    {
        if (x == null || y == null)
            throw new ArgumentException("At least one argument is null");
        if (x.attributeNo == y.attributeNo) return 0;
        if (x.attributeNo < y.attributeNo) return -1;
        return 1;
    }
    #endregion
}
You can then call it like this:
List<AttributeClass> listWithObj ....
listWithObj.Sort(new AttributeSort());
Should work like you want. You can create a type-safe comparer class as well.
