Why not a memberinfo() reflection function for C# [duplicate] - c#

This question already has answers here:
Why is there not a `fieldof` or `methodof` operator in C#? [closed]
(4 answers)
Closed 9 years ago.
There are sizeof() and typeof() operators, so why is there no memberinfo() operator that returns a System.Reflection.MemberInfo instance for the selected piece of code, to aid in writing reflection code?
Example:
static void Main()
{
    Type t = typeof(Foo);
    Foo foo = new Foo();
    PropertyInfo pi = memberinfo(Foo.Name) as PropertyInfo;
    // or should it be like this?
    // PropertyInfo pi = memberinfo(foo.Name) as PropertyInfo;
    string name = (string)pi.GetValue(foo, null);
}
I am trying to understand whether there is a fundamental reason why this could not be added to the C# spec.
I am not bashing anything, I am just doing some wishful thinking, so please be kind.

Eric Lippert talks about this extensively on his blog.
To quote directly from that post:
Just off the top of my head, here are a few {reasons why this hasn't been done}. (1) How do you unambiguously specify that you want a method info of an specific explicit interface implementation? (2) What if overload resolution would have skipped a particular method because it is not accessible? It is legal to get method infos of methods that are not accessible; metadata is always public even if it describes private details. Should we make it impossible to get private metadata, making the feature weak, or should we make it possible, and make infoof use a subtly different overload resolution algorithm than the rest of C#? (3) How do you specify that you want the info of, say, an indexer setter, or a property getter, or an event handler adder?

There are a couple of items which make this type of feature difficult. One of the primary ones being overloaded methods.
class Example {
    public void Method() { }
    public void Method(int p1) { }
}
Which MethodInfo would the following return?
var info = memberinfo(Example.Method);
As Wesley has pointed out though, Eric Lippert's Blog has the full discussion on this issue.
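For comparison, string-based reflection sidesteps the ambiguity only because the caller spells out the parameter list explicitly. A minimal sketch (mine, not from the original answer) using the Example class above:
using System;
using System.Reflection;

class OverloadDemo
{
    static void Main()
    {
        // Disambiguate the overloads by supplying the parameter types yourself.
        MethodInfo noArgs = typeof(Example).GetMethod("Method", Type.EmptyTypes);
        MethodInfo withInt = typeof(Example).GetMethod("Method", new[] { typeof(int) });
        Console.WriteLine(noArgs);   // Void Method()
        Console.WriteLine(withInt);  // Void Method(Int32)
    }
}
A hypothetical memberinfo(Example.Method) operator would have to invent comparable syntax to express that choice.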

There are numerous reasons why compile-time member reflection has not yet been implemented in C# - but most of them basically boil down to opportunity cost - there are many other languages features and enhancements that offer more benefit to more users. There's also the consideration that an infoof syntax could be complicated, confusing, and ultimately less powerful than using string-based reflection. It also wouldn't be a complete replacement for reflection since in many instances the metadata being manipulated isn't known at compile time.
However, all is not lost, there are a number of tricks that you can employ to perform slightly safer reflection that leverages capabilities of the C# language. For instance, we can take advantage of lambda expressions and expression trees to extract MemberInfo information. A simple example is:
public static class MethodExt {
    // Requires: using System; using System.Reflection;
    public static MethodInfo MemberInfo(Action d) {
        return d.Method;
    }
    public static MethodInfo MemberInfo<TResult>(Func<TResult> d) {
        return d.Method;
    }
    // other overloads ...
}
which works when you pass in a (non-anonymous) delegate created over a method group:
Foo foo = new Foo();
MethodInfo mi = MethodExt.MemberInfo(foo.ToString);
An implementation of the above using expression trees can be more robust and flexible, but also substantially more complicated. It can be used to represent member access, property access, indexers, and so on.
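As a rough illustration of the expression-tree route (the helper name is mine, and a fuller version would also unwrap the Convert node that appears when a value-typed member is selected):
using System;
using System.Linq.Expressions;
using System.Reflection;

public static class MemberInfoHelper
{
    // Pulls the MemberInfo out of a member-access lambda such as x => x.Name.
    public static MemberInfo GetMember<T, TMember>(Expression<Func<T, TMember>> selector)
    {
        var member = selector.Body as MemberExpression;
        if (member == null)
            throw new ArgumentException("Expression must be a simple member access.", "selector");
        return member.Member;
    }
}
With the Foo class from the question, PropertyInfo pi = (PropertyInfo)MemberInfoHelper.GetMember<Foo, string>(f => f.Name); survives renames that a string-based GetProperty("Name") call would miss.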
The main issue with all such "fancy" approaches is that they are confusing to developers who are used to seeing traditional reflection code. They also can't handle all cases, which often results in an unfortunate mixture of traditional reflection code and fancy expression tree code. Personally, while such techniques are interesting and inventive, it's probably best to avoid them in production code.

I myself use an approach that reads the IL from an anonymous method (using the Mono.Reflection library) and grabs the info of the last token found in that anonymous method. This tends to be the only way to get information about things like an add_EventHandler or set_Property accessor or captured local variables. To actually get properties I use expression trees.
The syntax that I use is Reflect.Member<T>.InfoOf<TMember>(Func<T,TMember> memberfunc), where Member is replaced with the member type I'm interested in. It's verbose, sure, but it lets a user know exactly what the code is trying to do. I also have Reflect.Member styles for things like statics and constructors.
Here is the relevant code snippet:
// Requires the Mono.Reflection library for MethodBase.GetInstructions().
internal static MemberInfo GetLastMemberOfType(MethodBase method, MemberTypes memberType)
{
    var instructions = method.GetInstructions();
    var operandType = memberType == MemberTypes.Field ? OperandType.InlineField : OperandType.InlineMethod;
    // Walk the IL backwards and return the last field/method token found.
    for (int i = instructions.Count - 1; i > -1; i--)
    {
        if (instructions[i].OpCode.OperandType == operandType)
        {
            return instructions[i].Operand as MemberInfo;
        }
    }
    return null;
}
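For illustration only, a hedged usage sketch (the Publisher type is mine; it assumes GetLastMemberOfType is reachable and that Mono.Reflection is referenced for GetInstructions()). Subscribing to an event inside an anonymous delegate leaves the add accessor as the last method token in its IL, which the helper above recovers:
public class Publisher
{
    public event EventHandler Changed;
}

// ...

Action<Publisher> subscribe = p => p.Changed += (s, e) => { };
MemberInfo addAccessor = GetLastMemberOfType(subscribe.Method, MemberTypes.Method);
// addAccessor now describes Publisher.add_Changed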
Does it replace string-based reflection? Absolutely not. Does it make my code safer while I'm refactoring interfaces and whatnot? Absolutely.
Will it ship with the product I'm working on? Probably not.

Infer generic type with two generic type parameters [duplicate]

This question already has answers here:
Partial generic type inference possible in C#?
(6 answers)
Closed 4 years ago.
I have the following method
public bool HasTypeAttribute<TAttribute, TType>(TType obj) where TAttribute : Attribute
{
    // Requires: using System.Reflection; for the GetCustomAttribute<T>() extension.
    return typeof(TType).GetCustomAttribute<TAttribute>() != null;
}
and I want to be able to use it like this:
MyClass instance = new MyClass();
TypeHelper.HasTypeAttribute<SerializableAttribute>(instance);
but I can't get it working because of the
incorrect number of type parameters
so that I need to call
TypeHelper.HasTypeAttribute<SerializableAttribute, MyClass>(instance);
which certainly makes sense, but why can't the compiler infer the type of the passed object? After all, if the method looked like this:
public void Demo<T>(T obj)
{
}
the compiler would certainly be able to infer the type, so that I can write
Foo.Demo(new Bar());
instead of
Foo.Demo<Bar>(new Bar());
So, is there a way to make type inference work in this case? Is it a design flaw on my part, or can I achieve what I want? Reordering the parameters doesn't help either.
You could break the call into multiple steps, which lets type inference kick in wherever it can.
public class TypeHelperFor<TType>
{
    public bool HasTypeAttribute<TAttribute>() where TAttribute : Attribute
    {
        return typeof(TType).GetCustomAttribute<TAttribute>() != null;
    }
}

public static class TypeHelper
{
    public static TypeHelperFor<T> For<T>(this T obj)
    {
        return new TypeHelperFor<T>();
    }
}
// The ideal, but unsupported
TypeHelper.HasTypeAttribute<SerializableAttribute>(instance);
// Chained
TypeHelper.For(instance).HasTypeAttribute<SerializableAttribute>();
// Straight-forward/non-chained
TypeHelper.HasTypeAttribute<SerializableAttribute, MyClass>(instance);
That should work alright for this case, but I'd warn against using it in cases where the final method returns void, because it's too easy to leave the chain half-formed if you're not doing anything with the return value.
e.g.
// If I forget to complete the chain here...
if (TypeHelper.For(instance)) // Compiler error
// But if I forget the last call on a chain focused on side-effects, like this one:
// DbHelper.For(table).Execute<MyDbOperationType>();
DbHelper.For(table); // Blissfully compiles but does nothing
// Whereas the non-chained version would protect me
DbHelper.Execute<MyTableType, MyDbOperationType>(table);
Because the C# rules don't allow that.
It would be feasible for C# to have a rule where, if some of the type parameters relate to method parameters (and hence could be inferred at least some of the time) and the number of explicitly given type arguments matched the number of remaining un-inferrable type parameters, the two would work in tandem.
This would need someone to propose it, convince the other people involved in the decision-making around C# that it was a good idea, and then implement it. This has not happened.
Aside from the fact that features start off having to prove themselves worth the extra complexity they bring (add anything to a language and it is immediately more complex, with more work, more chance of compiler bugs, etc.), the question then is: is it a good idea?
On the plus side, your code in this particular example would be better.
On the negative side, everyone's code is now slightly more complicated, because there are more ways something that is wrong can be wrong, resulting either in code that fails at runtime rather than compile time, or in less useful error messages.
That people already find some cases around inference confusing suggests that adding another complicating case would not be helpful.
Which isn't to say that it is definitely a bad idea, just that there are pros and cons that make it a matter of opinion, rather than an obvious lack.

more advantages or disadvantages to delegate members over classic functions?

class my_class
{
    public int add_1(int a, int b) { return a + b; }
    public Func<int, int, int> add_2 = (a, b) => { return a + b; };
}
add_1 is a method, whereas add_2 is a delegate. However, in this context delegates can fulfill a similar role.
Due to precedent and the design of the language, the default choice for C# methods should be functions.
However, both approaches have pros and cons, so I've produced a list. Are there any more advantages or disadvantages to either approach?
Advantages of conventional methods:
more conventional
outside users of the function see named parameters - for the add_2 syntax, arg_n and a type are generally not enough information.
works better with intellisense - ty Minitech
works with reflection - ty Minitech
works with inheritance - ty Eric Lippert
has a "this" - ty CodeInChaos
lower overheads, speed and memory - ty Minitech and CodeInChaos
don't need to think about public/private with respect to both changing and using the function - ty CodeInChaos
less dynamic; less is permitted that is not known at compile time - ty CodeInChaos
Advantages to "field of delegate type" methods.
more consistant, not member functions and data members, it's just all just data members.
can outwardly look and behave like a variable.
storing it in a container works well.
multiple classes could use the same function as if it were each ones member function, this would be very generic, concise and have good code reuse.
straightforward to use anywhere, for example as a local function.
presumably works well when passed around with garbage collection.
more dynamic, less must be known at compile time, for example there could be functions that configure the behaviour of objects at run time.
as if encapsulating it's code, can be combined and reworked, msdn.microsoft.com/en-us/library/ms173175%28v=vs.80%29.aspx
outside users of the function see unnamed parameters - sometimes this is helpfull although it would be nice to be able to name them.
can be more compact, in this simple example for example the return could be removed, if there were one parameter the brackets could also be removed.
roll you'r own behaviours like inheritance - ty Eric Lippert
other considerations such as functional, modular, distributed, (code writing, testing or reasoning about code) etc...
Please don't vote to close, thats happened already and it got reopened. It's a valid question even if either you don't think the delegates approach has much practical use given how it conflicts with established coding style or you don't like the advanteges of delegates.
First off, the "high order bit" for me with regards to this design decision would be that I would never do this sort of thing with a public field/method. At the very least I would use a property, and probably not even that.
For private fields, I use this pattern fairly frequently, usually like this:
class C
{
    // Static here because field initializers cannot reference instance members.
    private static Func<int, int> ActualFunction = (int y) => { ... };
    private static Func<int, int> Function = ActualFunction.Memoize();
}
and now I can very easily test the performance characteristics of different memoization strategies without having to change the text of ActualFunction at all.
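Memoize() is not part of the BCL; a minimal sketch of what such an extension might look like (my assumption, not Lippert's actual implementation):
using System;
using System.Collections.Generic;

public static class FuncExtensions
{
    // Caches the result per argument; not thread-safe, and unbounded by design.
    public static Func<TArg, TResult> Memoize<TArg, TResult>(this Func<TArg, TResult> f)
    {
        var cache = new Dictionary<TArg, TResult>();
        return arg =>
        {
            TResult result;
            if (!cache.TryGetValue(arg, out result))
            {
                result = f(arg);
                cache[arg] = result;
            }
            return result;
        };
    }
}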
Another advantage of the "methods are fields of delegate type" strategy is that you can implement code sharing techniques that are different from the ones we've "baked in" to the language. A protected field of delegate type is essentially a virtual method, but more flexible. Derived classes can replace it with whatever they want, and you have emulated a regular virtual method. But you could build custom inheritance mechanisms; if you really like prototype inheritance, for example, you could have a convention that if the field is null, then a method on some prototypical instance is called instead, and so on.
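As a hedged sketch of that idea (the types and names are mine), a protected delegate field standing in for a virtual method might look like this:
using System;

public class Animal
{
    // Plays the role of a virtual method; derived classes may swap it out.
    protected Func<string> Speak;

    public Animal() { Speak = () => "..."; }

    public string MakeSound() { return Speak(); }
}

public class Dog : Animal
{
    // "Overrides" by reassigning the field rather than using the override keyword.
    public Dog() { Speak = () => "Woof"; }
}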
A major disadvantage of the methods-are-fields-of-delegate-type approach is that of course, overloading no longer works. Fields must be unique in name; methods merely must be unique in signature. Also, you don't get generic fields the way that we get generic methods, so method type inference stops working.
The second one, in my opinion, offers absolutely no advantage over the first one. It's much less readable, is probably less efficient (given that Invoke has to be implied), and isn't more concise at all. What's more, if you ever use reflection it won't show up as a method, so if you do that to replace your methods in every class, you might break something that seems like it should work. In Visual Studio, IntelliSense won't include a description of the method, since you can't put XML comments on delegates (at least, not in the same way you would put them on normal methods) and you don't know what they point to anyway, unless the field is readonly (but what if the constructor changed it?), and it will show up as a field, not a method, which is confusing.
The only time you should really use lambdas is in methods where closures are required, or when they offer a significant convenience advantage. Otherwise, you're just decreasing readability (basically the readability of my first paragraph versus the current one) and breaking compatibility with previous versions of C#.
Why you should avoid delegates as methods by default, and what are alternatives:
Learning curve
Using delegates this way will surprise a lot of people. Not everyone can wrap their head around delegates, or why you'd want to swap out functions. There seems to be a learning curve. Once you get past it, delegates seem simple.
Perf and reliability
There's a performance loss to invoking delegates in this manner. This is another reason I would default to traditional method declaration unless it enabled something special in my pattern.
There's also an execution safety issue. Public fields are nullable. If you're passed an instance of a class with a public field you'll have to check that it isn't null before using it. This hurts perf and is kind of lame.
You can work around this by changing all public fields to properties (which is a rule in all .Net coding standards anyhow). Then in the setter throw an ArgumentNullException if someone tries to assign null.
Program design
Even if you can deal with all of this, allowing methods to be mutable at all goes against a lot of the design for static OO and functional programming languages.
In static OO types are always static, and dynamic behavior is enabled through polymorphism. You can know the exact behavior of a type based on its run time type. This is very helpful in debugging an existing program. Allowing your types to be modified at run time harms this.
In both the static OO and functional programming paradigms, limiting and isolating side effects is quite helpful, and using fully immutable structures is one of the primary ways to do this. The only point of exposing methods as delegates is to create mutable structures, which has the exact opposite effect.
Alternatives
If you really wanted to go so far as to always use delegates to replace methods, you should be using a language like IronPython or something else built on top of the DLR. Those languages will be tooled and tuned for the paradigm you're trying to implement. Users and maintainers of your code won't be surprised.
That being said, there are uses that justify using delegates as a substitute for methods. You shouldn't consider this option unless you have a compelling reason to do so that overrides these performance, confusion, reliability, and design issues. You should only do so if you're getting something in return.
Uses
For private members, Eric Lippert's answer describes a good use: (Memoization).
You can use it to implement a Strategy Pattern in a function-based manner rather than requiring a class hierarchy. Again, I'd use private members for this...
...Example code:
public class Context
{
    private Func<int, int, int> executeStrategy;

    public Context(Func<int, int, int> executeStrategy) {
        this.executeStrategy = executeStrategy;
    }

    public int ExecuteStrategy(int a, int b) {
        return executeStrategy(a, b);
    }
}
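A possible usage sketch (the values are only illustrative): the behaviour is chosen at construction time instead of through a class hierarchy.
var adder = new Context((a, b) => a + b);
var multiplier = new Context((a, b) => a * b);
Console.WriteLine(adder.ExecuteStrategy(2, 3));      // 5
Console.WriteLine(multiplier.ExecuteStrategy(2, 3)); // 6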
I have found a particular case where I think public delegate properties are warranted: to implement a Template Method Pattern with instances instead of derived classes...
...This is particularly useful in automated integration tests where you have a lot of setup/tear down. In such cases it often makes sense to keep state in a class designed to encapsulate the pattern rather than rely on the unit test fixture. This way you can easily support sharing the skeleton of the test suite between fixtures, without relying on (sometimes shoddy) test fixture inheritance. It also might be more amenable to parallelization, depending on the implementation of your tests.
var test = new MyFancyUITest
{
// I usually name these things in a more test specific manner...
Setup = () => { /* ... */ },
TearDown = () => { /* ... */ },
};
test.Execute();
Intellisense Support
outside users of the function see unnamed parameters - sometimes this is helpfull although it would be nice to be able to name them.
Use a named delegate - I believe this will get you at least some Intellisense for the parameters (probably just the names, less likely XML docs - please correct me if I'm wrong):
public class MyClass
{
public delegate int DoSomethingImpl(int foo, int bizBar);
public DoSomethingImpl DoSomething = (x, y) => { return x + y; };
}
I'd avoid delegate properties/fields as method replacements for public methods. For private methods it's a tool, but not one I use very often.
instance delegate fields have a per instance memory cost. Probably a premature optimization for most classes, but still something to keep in mind.
Your code uses a public mutable field, which can be changed at any time. That hurts encapsulation.
If you use the field initializer syntax, you can't access this. So field initializer syntax is mainly useful for static methods.
Makes static analysis much harder, since the implementation of that method isn't known at compile-time.
There are some cases where delegate properties/fields might be useful:
Handlers of some sort. Especially if multi-casting (and thus the event subscription pattern) doesn't make much sense
Assigning something that can't be easily described by a simple method body. Such as a memoized function.
The delegate is runtime generated or at least its value is only decided at runtime
Using a closure over local variables is an alternative to using a method and private fields. I strongly dislike classes with lots of fields, especially if some of these fields are only used by two methods or less. In these situations, using a delegate in a field can be preferable to conventional methods
class MyClassConventional {
    int? someValue; // When Mark() is called, remember the value so that we can do something with it in Process(). Not used in any other method.
    int X;

    void Mark() {
        someValue = X;
    }

    void Process() {
        // Do something with someValue.Value
    }
}

class MyClassClosure {
    int X;
    Action Process = null;

    void Mark() {
        int someValue = X;
        Process = () => { /* Do something with someValue */ };
    }
}
This question presents a false dichotomy - between functions, and a delegate with an equivalent signature. The main difference is that one of the two should only be used if there is no other choice. Use it in your day-to-day work, and it will be thrown out of any code review.
The benefits that have been mentioned are far outweighed by the fact that there is almost never a reason to write code that is so obscure; especially when this code makes it look like you don't know how to program C#.
I urge anyone reading this to ignore any of the benefits which have been stated, since they are all overwhelmed by the fact that this is the kind of code that demonstrates that you do not know how to program in C#.
The only exception to that rule is if you have a need for one of the benefits, and that need can't be satisfied in any other way. In that case, you'll need to write more comments than code to explain why you have a good reason to do it. Be prepared to answer as clearly as Eric Lippert did. You'd better be able to explain, as well as Eric does, that you can't accomplish your requirements and write understandable code at the same time.

Why we require Generics? [duplicate]

I thought I'd offer this softball to whomever would like to hit it out of the park. What are generics, what are the advantages of generics, why, where, how should I use them? Please keep it fairly basic. Thanks.
Allows you to write code/use library methods which are type-safe, i.e. a List<string> is guaranteed to be a list of strings.
As a result of generics being used, the compiler can perform compile-time checks on code for type safety, i.e. are you trying to put an int into that list of strings? Using an ArrayList would cause that to be a less transparent runtime error.
Faster than using objects, as it either avoids boxing/unboxing (where .NET has to convert value types to reference types or vice versa) or avoids casting from objects to the required reference type.
Allows you to write code which is applicable to many types with the same underlying behaviour, i.e. a Dictionary<string, int> uses the same underlying code as a Dictionary<DateTime, double>; using generics, the framework team only had to write one piece of code to achieve both results with the aforementioned advantages too.
I really hate to repeat myself. I hate typing the same thing more often than I have to. I don't like restating things multiple times with slight differences.
Instead of creating:
class MyObjectList {
MyObject get(int index) {...}
}
class MyOtherObjectList {
MyOtherObject get(int index) {...}
}
class AnotherObjectList {
AnotherObject get(int index) {...}
}
I can build one reusable class... (in the case where you don't want to use the raw collection for some reason)
class MyList<T> {
T get(int index) { ... }
}
I'm now 3x more efficient and I only have to maintain one copy. Why WOULDN'T you want to maintain less code?
This is also true for non-collection classes such as a Callable<T> or a Reference<T> that has to interact with other classes. Do you really want to extend Callable<T> and Future<T> and every other associated class to create type-safe versions?
I don't.
Not needing to typecast is one of the biggest advantages of Java generics, as it will perform type checking at compile-time. This will reduce the possibility of ClassCastExceptions which can be thrown at runtime, and can lead to more robust code.
But I suspect that you're fully aware of that.
Every time I look at generics it gives me a headache. I find the best part of Java to be its simplicity and minimal syntax, and generics are not simple and add a significant amount of new syntax.
At first, I didn't see the benefit of generics either. I started learning Java from the 1.4 syntax (even though Java 5 was out at the time) and when I encountered generics, I felt that it was more code to write, and I really didn't understand the benefits.
Modern IDEs make writing code with generics easier.
Most modern, decent IDEs are smart enough to assist with writing code with generics, especially with code completion.
Here's an example of making an Map<String, Integer> with a HashMap. The code I would have to type in is:
Map<String, Integer> m = new HashMap<String, Integer>();
And indeed, that's a lot to type just to make a new HashMap. However, in reality, I only had to type this much before Eclipse knew what I needed:
Map<String, Integer> m = new Ha Ctrl+Space
True, I did need to select HashMap from a list of candidates, but basically the IDE knew what to add, including the generic types. With the right tools, using generics isn't too bad.
In addition, since the types are known, when retrieving elements from the generic collection the IDE will act as if that object is already an object of its declared type -- there is no need to cast for the IDE to know what the object's type is.
A key advantage of generics comes from the way they play well with the new Java 5 features. Here's an example of tossing integers into a Set and calculating its total:
Set<Integer> set = new HashSet<Integer>();
set.add(10);
set.add(42);
int total = 0;
for (int i : set) {
total += i;
}
In that piece of code, there are three new Java 5 features present:
Generics
Autoboxing and unboxing
For-each loop
First, generics and autoboxing of primitives allow the following lines:
set.add(10);
set.add(42);
The integer 10 is autoboxed into an Integer with the value of 10. (And same for 42). Then that Integer is tossed into the Set which is known to hold Integers. Trying to throw in a String would cause a compile error.
Next, the for-each loop takes advantage of all three of those features:
for (int i : set) {
total += i;
}
First, the Set containing Integers is used in a for-each loop. Each element is declared to be an int, and that is allowed because the Integer is unboxed back to the primitive int. And the fact that this unboxing occurs is known because generics were used to specify that Integers were held in the Set.
Generics can be the glue that brings together the new features introduced in Java 5, and it just makes coding simpler and safer. And most of the time IDEs are smart enough to help you with good suggestions, so generally it won't take a whole lot more typing.
And frankly, as can be seen from the Set example, I feel that utilizing Java 5 features can make the code more concise and robust.
Edit - An example without generics
The following is an illustration of the above Set example without the use of generics. It is possible, but isn't exactly pleasant:
Set set = new HashSet();
set.add(10);
set.add(42);
int total = 0;
for (Object o : set) {
total += (Integer)o;
}
(Note: The above code will generate unchecked conversion warning at compile-time.)
When using non-generic collections, the values entered into the collection are treated as objects of type Object. Therefore, in this example, an Object is what is being added to the set.
set.add(10);
set.add(42);
In the above lines, autoboxing is in play -- the primitive int values 10 and 42 are autoboxed into Integer objects, which are added to the Set. However, keep in mind that the Integer objects are handled as Objects, as there is no type information to help the compiler know what type the Set should expect.
for (Object o : set) {
This is the part that is crucial. The reason the for-each loop works is because the Set implements the Iterable interface, which returns an Iterator with type information, if present. (Iterator<T>, that is.)
However, since there is no type information, the Set will return an Iterator which will return the values in the Set as Objects, and that is why the element being retrieved in the for-each loop must be of type Object.
Now that the Object is retrieved from the Set, it needs to be cast to an Integer manually to perform the addition:
total += (Integer)o;
Here, a typecast is performed from an Object to an Integer. In this case, we know it will always work, but manual typecasting always makes me feel it is fragile code that could be damaged if a minor change is made elsewhere. (I feel that every typecast is a ClassCastException waiting to happen, but I digress...)
The Integer is now unboxed into an int and allowed to perform the addition into the int variable total.
I hope I could illustrate that the new features of Java 5 can be used with non-generic code, but it just isn't as clean and straightforward as writing code with generics. And, in my opinion, to take full advantage of the new features in Java 5, one should be looking into generics, if at the very least because they allow for compile-time checks that prevent invalid typecasts from throwing exceptions at runtime.
If you were to search the Java bug database just before 1.5 was released, you'd find seven times more bugs with NullPointerException than ClassCastException. So it doesn't seem that it is a great feature to find bugs, or at least bugs that persist after a little smoke testing.
For me the huge advantage of generics is that they document in code important type information. If I didn't want that type information documented in code, then I'd use a dynamically typed language, or at least a language with more implicit type inference.
Keeping an object's collections to itself isn't a bad style (but then the common style is to effectively ignore encapsulation). It rather depends upon what you are doing. Passing collections to "algorithms" is slightly easier to check (at or before compile-time) with generics.
Generics in Java facilitate parametric polymorphism. By means of type parameters, you can pass arguments to types. Just as a method like String foo(String s) models some behaviour, not just for a particular string, but for any string s, so a type like List<T> models some behaviour, not just for a specific type, but for any type. List<T> says that for any type T, there's a type of List whose elements are Ts. So List is actually a type constructor. It takes a type as an argument and constructs another type as a result.
Here are a couple of examples of generic types I use every day. First, a very useful generic interface:
public interface F<A, B> {
public B f(A a);
}
This interface says that for some two types, A and B, there's a function (called f) that takes an A and returns a B. When you implement this interface, A and B can be any types you want, as long as you provide a function f that takes the former and returns the latter. Here's an example implementation of the interface:
F<Integer, String> intToString = new F<Integer, String>() {
    public String f(Integer i) {
        return String.valueOf(i);
    }
};
Before generics, polymorphism was achieved by subclassing using the extends keyword. With generics, we can actually do away with subclassing and use parametric polymorphism instead. For example, consider a parameterised (generic) class used to calculate hash codes for any type. Instead of overriding Object.hashCode(), we would use a generic class like this:
public final class Hash<A> {
private final F<A, Integer> hashFunction;
public Hash(final F<A, Integer> f) {
this.hashFunction = f;
}
public int hash(A a) {
return hashFunction.f(a);
}
}
This is much more flexible than using inheritance, because we can stay with the theme of using composition and parametric polymorphism without locking down brittle hierarchies.
Java's generics are not perfect though. You can abstract over types, but you can't abstract over type constructors, for example. That is, you can say "for any type T", but you can't say "for any type T that takes a type parameter A".
I wrote an article about these limits of Java generics, here.
One huge win with generics is that they let you avoid subclassing. Subclassing tends to result in brittle class hierarchies that are awkward to extend, and classes that are difficult to understand individually without looking at the entire hierarchy.
Whereas before generics you might have classes like Widget extended by FooWidget, BarWidget, and BazWidget, with generics you can have a single generic class Widget<A> that takes a Foo, Bar or Baz in its constructor to give you Widget<Foo>, Widget<Bar>, and Widget<Baz>.
Generics avoid the performance hit of boxing and unboxing. Basically, look at ArrayList vs List<T>. Both do the same core things, but List<T> will be a lot faster because you don't have to box to/from object.
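A small sketch of that difference (assuming using System.Collections and System.Collections.Generic):
var untyped = new ArrayList();
untyped.Add(42);              // the int is boxed into an object
int a = (int)untyped[0];      // unbox + cast, checked only at run time

var typed = new List<int>();
typed.Add(42);                // stored as an int, no boxing
int b = typed[0];             // no cast needed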
The best benefit of generics is code reuse. Let's say that you have a lot of business objects, and you are going to write VERY similar code for each entity to perform the same actions (i.e. LINQ to SQL operations).
With generics, you can create a class that will be able to operate given any of the types that inherit from a given base class or implement a given interface like so:
public interface IEntity
{
}
public class Employee : IEntity
{
public string FirstName { get; set; }
public string LastName { get; set; }
public int EmployeeID { get; set; }
}
public class Company : IEntity
{
public string Name { get; set; }
public string TaxID { get; set; }
}
public class DataService<ENTITY, DATACONTEXT>
where ENTITY : class, IEntity, new()
where DATACONTEXT : DataContext, new()
{
public void Create(List<ENTITY> entities)
{
using (DATACONTEXT db = new DATACONTEXT())
{
Table<ENTITY> table = db.GetTable<ENTITY>();
foreach (ENTITY entity in entities)
table.InsertOnSubmit(entity);
db.SubmitChanges();
}
}
}
public class MyTest
{
public void DoSomething()
{
var dataService = new DataService<Employee, MyDataContext>();
dataService.Create(new List<Employee> { new Employee { FirstName = "Bob", LastName = "Smith", EmployeeID = 5 } });
var otherDataService = new DataService<Company, MyDataContext>();
otherDataService.Create(new List<Company> { new Company { Name = "ACME", TaxID = "123-111-2233" } });
}
}
Notice the reuse of the same service given the different Types in the DoSomething method above. Truly elegant!
There are many other great reasons to use generics in your work; this is my favorite.
I just like them because they give you a quick way to define a custom type (as I use them anyway).
So, for example, instead of defining a structure consisting of a string and an integer, and then having to implement a whole set of objects and methods on how to access an array of those structures and so forth, you can just make a Dictionary:
Dictionary<int, string> dictionary = new Dictionary<int, string>();
And the compiler/IDE does the rest of the heavy lifting. A Dictionary in particular lets you use the first type as a key (no repeated keys).
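A brief usage sketch (assuming using System and System.Collections.Generic):
var dictionary = new Dictionary<int, string>();
dictionary.Add(1, "one");
dictionary[2] = "two";

string value;
if (dictionary.TryGetValue(1, out value))
    Console.WriteLine(value);  // "one", no cast required

// dictionary.Add(1, "uno");   // would throw: keys must be unique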
Typed collections - even if you don't want to use them, you're likely to have to deal with them from other libraries and other sources.
Generic typing in class creation:
public class Foo<T> {
    public T get() ...
Avoidance of casting - I've always disliked things like
new Comparator() {
    public int compareTo(Object o) {
        if (o instanceof ClassICareAbout) ...
Where you're essentially checking for a condition that should only exist because the interface is expressed in terms of objects.
My initial reaction to generics was similar to yours - "too messy, too complicated". My experience is that after using them for a bit you get used to them, and code without them feels less clearly specified, and just less comfortable. Aside from that, the rest of the java world uses them so you're going to have to get with the program eventually, right?
To give a good example, imagine you have a class called Foo:
public class Foo
{
public string Bar() { return "Bar"; }
}
Example 1
Now you want to have a collection of Foo objects. You have two options, ArrayList or List<Foo>, both of which work in a similar manner.
ArrayList al = new ArrayList();
List<Foo> fl = new List<Foo>();
//code to add Foos
al.Add(new Foo());
fl.Add(new Foo());
In the above code, if I try to add a class of FireTruck instead of Foo, the ArrayList will add it, but the Generic List of Foo will cause an exception to be thrown.
Example two.
Now you have your two lists and you want to call the Bar() function on each. Since the ArrayList is filled with Objects, you have to cast them before you can call Bar(). But since the generic List of Foo can only contain Foos, you can call Bar() directly on those.
foreach(object o in al)
{
Foo f = (Foo)o;
f.Bar();
}
foreach(Foo f in fl)
{
f.Bar();
}
Haven't you ever written a method (or a class) where the key concept of the method/class wasn't tightly bound to a specific data type of the parameters/instance variables (think linked list, max/min functions, binary search, etc.)?
Haven't you ever wished you could reuse the algorithm/code without resorting to cut-and-paste reuse or compromising strong typing (e.g. I want a List of Strings, not a List of things I hope are Strings!)?
That's why you should want to use generics (or something better).
The primary advantage, as Mitchel points out, is strong-typing without needing to define multiple classes.
This way you can do stuff like:
List<SomeCustomClass> blah = new List<SomeCustomClass>();
blah[0].SomeCustomFunction();
Without generics, you would have to cast blah[0] to the correct type to access its functions.
Don't forget that generics aren't just used by classes, they can also be used by methods. For example, take the following snippet:
private <T extends Throwable> T logAndReturn(T t) {
logThrowable(t); // some logging method that takes a Throwable
return t;
}
It is simple, but can be used very elegantly. The nice thing is that the method returns whatever it was that it was given. This helps out when you are handling exceptions that need to be re-thrown back to the caller:
...
} catch (MyException e) {
throw logAndReturn(e);
}
The point is that you don't lose the type by passing it through a method. You can throw the correct type of exception instead of just a Throwable, which would be all you could do without generics.
This is just a simple example of one use for generic methods. There are quite a few other neat things you can do with generic methods. The coolest, in my opinion, is type inference with generics. Take the following example (taken from Josh Bloch's Effective Java, 2nd Edition):
...
Map<String, Integer> myMap = createHashMap();
...
public <K, V> Map<K, V> createHashMap() {
return new HashMap<K, V>();
}
This doesn't do a lot, but it does cut down on some clutter when the generic types are long (or nested; i.e. Map<String, List<String>>).
Generics allow you to create objects that are strongly typed, yet you don't have to define the specific type up front. I think the most useful example is the List and similar classes.
Using the generic List you can have a List<int>, a List<string>, a List<whatever you want>, and you can always rely on the strong typing; you don't have to convert or cast like you would with an Array or a non-generic List.
The JVM casts anyway... it implicitly creates code which treats the generic type as Object and creates casts to the desired instantiation. Java generics are just syntactic sugar.
I know this is a C# question, but generics are used in other languages too, and their use/goals are quite similar.
Java collections use generics since Java 1.5. So, a good place to use them is when you are creating your own collection-like object.
An example I see almost everywhere is a Pair class, which holds two objects, but needs to deal with those objects in a generic way.
class Pair<F, S> {
public final F first;
public final S second;
public Pair(F f, S s)
{
first = f;
second = s;
}
}
Whenever you use this Pair class you can specify which kind of objects you want it to deal with and any type cast problems will show up at compile time, rather than runtime.
Generics can also have their bounds defined with the keywords 'super' and 'extends'. For example, if you want to deal with a generic type but you want to make sure it extends a class called Foo (which has a setTitle method):
public class FooManager<F extends Foo> {
    public void setTitle(F foo, String title) {
        foo.setTitle(title);
    }
}
While not very interesting on its own, it's useful to know that whenever you deal with a FooManager<MyClass>, you know that it will handle MyClass instances, and that MyClass extends Foo.
From the Sun Java documentation, in response to "why should i use generics?":
"Generics provides a way for you to communicate the type of a collection to the compiler, so that it can be checked. Once the compiler knows the element type of the collection, the compiler can check that you have used the collection consistently and can insert the correct casts on values being taken out of the collection... The code using generics is clearer and safer.... the compiler can verify at compile time that the type constraints are not violated at run time [emphasis mine]. Because the program compiles without warnings, we can state with certainty that it will not throw a ClassCastException at run time. The net effect of using generics, especially in large programs, is improved readability and robustness. [emphasis mine]"
Generics let you use strong typing for objects and data structures that should be able to hold any object. It also eliminates tedious and expensive typecasts when retrieving objects from generic structures (boxing/unboxing).
One example that uses both is a linked list. What good would a linked list class be if it could only use object Foo? To implement a linked list that can handle any kind of object, the linked list and the nodes in a hypothetical node inner class must be generic if you want the list to contain only one type of object.
If your collection contains value types, they don't need to be boxed/unboxed to objects when inserted into the collection, so your performance increases dramatically. Cool add-ons like ReSharper can generate more code for you, like foreach loops.
Another advantage of using Generics (especially with Collections/Lists) is you get Compile Time Type Checking. This is really useful when using a Generic List instead of a List of Objects.
The single biggest reason is that they provide type safety:
List<Customer> custCollection = new List<Customer>();
as opposed to,
object[] custCollection = new object[] { cust1, cust2 };
as a simple example.
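To make the difference concrete (cust1 and cust2 are assumed Customer instances, as in the snippet above): the typed list rejects the mistake at compile time, while the object array only fails when the element is cast back at run time.
List<Customer> typed = new List<Customer> { cust1, cust2 };
// typed.Add("not a customer");                              // does not compile

object[] untyped = new object[] { cust1, "not a customer" }; // compiles fine
Customer c = (Customer)untyped[1];                           // InvalidCastException at run time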
In summary, generics allow you to specify more precisely what you intend to do (stronger typing).
This has several benefits for you:
Because the compiler knows more about what you want to do, it allows you to omit a lot of type-casting, because it already knows that the type will be compatible.
This also gets you earlier feedback about the correctness of your program. Things that previously would have failed at runtime (e.g. because an object couldn't be cast to the desired type) now fail at compile time, and you can fix the mistake before your testing department files a cryptic bug report.
The compiler can do more optimizations, like avoiding boxing, etc.
A couple of things to add/expand on (speaking from the .NET point of view):
Generic types allow you to create role-based classes and interfaces. This has been said already in more basic terms, but I find you start to design your code with classes which are implemented in a type-agnostic way - which results in highly reusable code.
Generic arguments on methods can do the same thing, but they also help apply the "Tell Don't Ask" principle to casting, i.e. "give me what I want, and if you can't, you tell me why".
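As a hedged sketch of that "give me what I want, and if you can't, you tell me why" idea (the helper and its names are mine, not an established API):
using System;
using System.Collections.Generic;

public static class ServiceLookupExtensions
{
    // Casts on the caller's behalf and explains itself when it can't.
    public static T GetAs<T>(this IDictionary<string, object> services, string key)
    {
        object value = services[key];
        if (value is T)
            return (T)value;
        throw new InvalidOperationException(
            "Entry '" + key + "' is a " + value.GetType().Name + ", not a " + typeof(T).Name + ".");
    }
}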
I use them, for example, in a GenericDao implemented with Spring ORM and Hibernate, which looks like this:
public abstract class GenericDaoHibernateImpl<T>
extends HibernateDaoSupport {
private Class<T> type;
public GenericDaoHibernateImpl(Class<T> clazz) {
type = clazz;
}
public void update(T object) {
getHibernateTemplate().update(object);
}
#SuppressWarnings("unchecked")
public Integer count() {
return ((Integer) getHibernateTemplate().execute(
new HibernateCallback() {
public Object doInHibernate(Session session) {
// Code in Hibernate for getting the count
}
}));
}
.
.
.
}
By using generics, my implementations of these DAOs force the developer to pass them just the entities they are designed for, simply by subclassing the GenericDao:
public class UserDaoHibernateImpl extends GenericDaoHibernateImpl<User> {
public UserDaoHibernateImpl() {
super(User.class); // This gives Hibernate a .class to work with,
                   // as generics disappear at runtime
}
// Entity specific methods here
}
My little framework is more robust (it has things like filtering, lazy loading, and searching). I just simplified it here to give you an example.
I, like Steve and you, said at the beginning "too messy and complicated", but now I see its advantages.
Obvious benefits like "type safety" and "no casting" have already been mentioned, so maybe I can talk about some other "benefits" which I hope will help.
First of all, generics is a language-independent concept and, IMO, it might make more sense if you think about regular (runtime) polymorphism at the same time.
For example, the polymorphism we know from object-oriented design has a runtime notion: the receiver object is figured out at runtime as program execution goes, and the relevant method gets called accordingly, depending on the runtime type. In generics, the idea is somewhat similar, but everything happens at compile time. What does that mean and how do you make use of it?
(Let's stick with generic methods to keep it compact.) It means that you can still have the same method on separate classes (like you did previously in polymorphic classes), but this time they're auto-generated by the compiler depending on the types set at compile time. You parametrise your methods on the type you give at compile time. So, instead of writing the methods from scratch for every single type you have, as you do in runtime polymorphism (method overriding), you let the compiler do the work during compilation. This has an obvious advantage since you don't need to anticipate all possible types that might be used in your system, which makes it far more scalable without code changes.
Classes work pretty much the same way. You parametrise the type and the code is generated by the compiler.
Once you get the idea of "compile time", you can make use of "bounded" types and restrict what can be passed as a parametrised type through classes/methods. So, you can control what gets passed through, which is a powerful thing, especially if you have a framework being consumed by other people.
public interface Foo<T extends MyObject> extends Hoo<T>{
...
}
No one can supply anything other than a MyObject (or one of its subtypes) now.
Also, you can "enforce" type constraints on your method arguments, which means you can make sure both of your method arguments are of the same type:
public <T extends MyObject> void foo(T t1, T t2) {
...
}
Hope all of this makes sense.
I once gave a talk on this topic. You can find my slides, code, and audio recording at http://www.adventuresinsoftware.com/generics/.
Using generics for collections is just simple and clean. Even if you punt on it everywhere else, the gain from the collections is a win to me.
List<Stuff> stuffList = getStuff();
for (Stuff stuff : stuffList) {
    stuff.doStuff();
}
vs
List stuffList = getStuff();
Iterator i = stuffList.iterator();
while (i.hasNext()) {
    Stuff stuff = (Stuff) i.next();
    stuff.doStuff();
}
or
List stuffList = getStuff();
for (int i = 0; i < stuffList.size(); i++) {
    Stuff stuff = (Stuff) stuffList.get(i);
    stuff.doStuff();
}
That alone is worth the marginal "cost" of generics, and you don't have to be a generics guru to use this and get value.
Generics also give you the ability to create more reusable objects/methods while still providing type-specific support. You also gain a lot of performance in some cases. I don't know the full spec of Java generics, but in .NET I can specify constraints on the type parameter, such as implementing an interface, having a parameterless constructor, and deriving from a given base class.
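A small sketch of those .NET constraints (the interface and base class here are made up for the example):
public interface IValidatable { bool Validate(); }
public abstract class EntityBase { public int Id { get; set; } }

// T must derive from EntityBase, implement IValidatable,
// and expose a public parameterless constructor.
public class Repository<T> where T : EntityBase, IValidatable, new()
{
    public T CreateNew() { return new T(); }  // allowed because of the new() constraint
}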
Enabling programmers to implement generic algorithms - By using generics, programmers can implement generic algorithms that work on collections of different types, can be customized, and are type-safe and easier to read.
Stronger type checks at compile time - A Java compiler applies strong type checking to generic code and issues errors if the code violates type safety. Fixing compile-time errors is easier than fixing runtime errors, which can be difficult to find.
Elimination of casts.

Good practice to create extension methods that apply to System.Object?

I'm wondering whether I should create extension methods that apply on the object level or whether they should be located at a lower point in the class hierarchy. What I mean is something along the lines of:
public static string SafeToString(this Object o) {
    if (o == null || o is System.DBNull)
        return "";
    else {
        if (o is string)
            return (string)o;
        else
            return "";
    }
}

public static int SafeToInt(this Object o) {
    if (o == null || o is System.DBNull)
        return 0;
    else {
        if (o.IsNumeric())  // IsNumeric() is another extension method of the author's
            return Convert.ToInt32(o);
        else
            return 0;
    }
}
//same for double.. etc
I wrote those methods since I have to deal a lot with database data (from the OleDbDataReader) that can be null (though it shouldn't be), since the underlying database is unfortunately very liberal with columns that may be null. And to make my life a little easier, I came up with those extension methods.
What I'd like to know is whether this is good style, acceptable style or bad style. I kinda have my worries about it, since it kinda "pollutes" the Object class.
Thank you in advance & Best Regards :)
Christian
P.S. I didn't tag it as "subjective" intentionally.
No, that is not good practice. You want to apply extension methods at the lowest possible point. I believe there is a time and a place for (almost) everything, but extension methods on System.Object would almost never be appropriate. You should be able to apply extension methods such as these much further down the inheritance stack. Otherwise they will clutter your IntelliSense and probably end up being used/depended upon incorrectly by other developers.
However, extension methods on data objects for dealing with null values are a very good use of extension methods. Consider putting them right on your OleDbDataReader. I have a generic extension method called ValueOrDefault that . . . well, I'll just show it to you:
<Extension()> _
Public Function ValueOrDefault(Of T)(ByVal r As DataRow, ByVal fieldName As String) As T
If r.IsNull(fieldName) Then
If GetType(T) Is GetType(String) Then
Return CType(CType("", Object), T)
Else
Return Nothing
End If
Else
Return CType(r.Item(fieldName), T)
End If
End Function
That's VB, but you get the picture. This sucker saves me a ton of time and really makes for clean code when reading out of a DataRow. You are on the right track, but your sense of smell is correct: you have the extension methods too high.
Putting the extension methods into a separate namespace is better than nothing (this is a perfectly valid use of namespaces; Linq uses this), but you shouldn't have to. To make these methods apply to various db objects, apply the extension methods to IDataRecord.
Extension method pollution of System.Object can be quite annoying in the general case, but you can place these extension methods in a separate namespace so that developers must actively opt in to use those methods.
If you couple this with code that follows the Single Responsibility Principle, you should only have to import this namespace in relatively few classes.
Under such circumstances, such extension methods may be acceptable.
An excerpt from the book "Framework Design Guidelines":
Avoid defining extension methods on System.Object, unless absolutely necessary. When doing so, be aware that VB users will not be able to use thus-defined extension methods and, as such, they will not be able to take advantage of the usability/syntax benefits that come with extension methods. This is because, in VB, declaring a variable as Object forces all method invocations on it to be late bound, while bindings to extension methods are compile-time determined (early bound). For example:
public static class SomeExtensions {
    static void Foo(this object o) { … }
}
…
Object o = …
o.Foo();
In this example, the call to Foo will fail in VB. Instead, the VB syntax should simply be: SomeExtensions.Foo(o).
Note that the guideline applies to other languages where the same binding behavior is present, or where extension methods are not supported.
The Framework Design Guidelines advise you not to do this. However, these guidelines are aimed especially at frameworks, so if you find it very useful in your (line of) business application, please do so.
But please be aware that adding these extension methods on object might clutter IntelliSense and might confuse other developers. They might not expect to see those methods. In that case, just use old-fashioned static methods :-)
One thing I personally find troubling about sprinkling extension methods everywhere is that my CPU (my brain) is trained to find possible NullReferenceExceptions. Because an extension method looks like an instance method, my brain often receives a PossibleUseOfNullObject interrupt from the source code parser when reading such code. In that case I have to analyze whether a NullReferenceException can actually occur or not. This makes reading through the code much harder, because I'm interrupted more often.
For this reason I’m very conservative about using extension methods. But this doesn’t mean I don’t think they’re useful. Hell no! I've even written a library for precondition validation that uses extension methods extensively. I even initially defined extension methods on System.Object when I started writing that library. However, because this is a reusable library and is used by VB developers I decided to remove these extension methods on System.Object.
Perhaps consider adding the extension methods to the IDataRecord interface? (which your OleDbDataReader implements)
public static class DataRecordExtensions
{
public static string GetStringSafely(this IDataRecord record, int index)
{
if (record.IsDBNull(index))
return string.Empty;
return record.GetString(index);
}
public static Guid GetGuidSafely(this IDataRecord record, int index)
{
if (record.IsDBNull(index))
return default(Guid);
return record.GetGuid(index);
}
public static DateTime GetDateTimeSafely(this IDataRecord record, int index)
{
if (record.IsDBNull(index))
return default(DateTime);
return record.GetDateTime(index);
}
}
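A possible usage sketch (reader is assumed to be an open OleDbDataReader, or any other IDataRecord implementation):
while (reader.Read())
{
    string name = reader.GetStringSafely(0);
    DateTime created = reader.GetDateTimeSafely(1);
    // ...
}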
I know it's not best practice, but for my projects I like to include this extension for converting data types; I find it much easier than the Convert class.
public static T ChangeType<T>(this object obj) where T : new()
{
    try
    {
        Type type = typeof(T);
        return (T)Convert.ChangeType(obj, type);
    }
    catch
    {
        // Swallows conversion failures and falls back to a default instance.
        return new T();
    }
}
It'll show up on every object, but I usually just use this one instead of having a list of extensions.
Works like this...
int i = "32".ChangeType<int>();
bool success = "true".ChangeType<bool>();
In addition to the reasons already mentioned, it should be borne in mind that extension methods on System.Object are not supported in VB as long as the variable is statically typed as System.Object (and if the variable is not statically typed as Object, you would be better off extending another, more specific type anyway).
The reason for this limitation is backward compatibility. See this question for further details.
