I've used a decompiler to get the source of a C# library. One of the constructs it produced, unknown to me, is the following:
Action action = null;
<>c__DisplayClass9 class2;
action = new Action(class2, (IntPtr) this.<OptimizeVisuals>b__6);
Things like <>c__DisplayClass9 or (IntPtr) this.<OptimizeVisuals>b__6 I just cannot grok.
What's more, this expression cannot be compiled by the C# compiler, so I need to come up with something more friendly. I tried to Google parts of this, but with no luck. Could you give me some hints? It seems it may have something to do with anonymous methods, but that's my best guess.
Thanks in advance.
EDIT: Maybe my initial question was not very clear about what I need to achieve. So just to reemphasize: I need to convert the code mentioned into normal C# code that does the same thing as before decompilation. My task is to change certain things in the library I'm decompiling, while keeping other functionality (like the mentioned one) intact. This is a really important thing for me, so I'd really appreciate it if somebody could help me with this.
The compiler generates some members for auto-implemented properties, anonymous methods and such. To prevent collisions with your own names, the compiler uses names that are illegal in C# (but still legal in the CLR).
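For example, a snippet like the one above usually maps back to nothing more exotic than a lambda or anonymous delegate capturing a local. Here is a purely illustrative sketch (the method body and the captured variable are invented, since the real ones aren't visible in your snippet):

using System;

class Example
{
    void OptimizeVisuals()
    {
        // <>c__DisplayClass9 is the compiler-generated closure class that holds the
        // captured locals, and <OptimizeVisuals>b__6 is the compiler's name for the
        // anonymous method itself. In ordinary source form the whole thing is just:
        int capturedLocal = 42;   // stands in for whatever the closure actually captured

        Action action = () => Console.WriteLine(capturedLocal);

        action();
    }

    static void Main()
    {
        new Example().OptimizeVisuals();
    }
}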
Related
So I've been assigned to talk about adding new code routines into a program, such as self-contained functions and new classes, but I haven't actually been taught this kind of programming terminology yet. I've tried looking everywhere online, but nothing really explains it well enough.
The questions that I'm a little confused by are the following:
What are self contained functions in C#? (Code examples would help :3)
And how could it be added in an object-oriented way?
Help would be very appreciated, thanks.
Self-contained functions, classes and object orientation are all pretty much the same thing at the high level you're talking about.
http://en.wikipedia.org/wiki/Object-oriented_programming
I'm guessing that your code base is a mess, with functions using global variables, and giant code files. The goal is to make each item do only one thing. So instead of a function called "Run" which is 500 lines long, you should instead have a function called "Run" which then calls functions "GetRecentData", "CheckDataForErrors", "ReportErrors", "ProcessValidData", and "ReportSuccess". This means that when you need to change the definition of error data, for example, all the related code is neatly in "CheckDataForErrors".
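A rough sketch of that shape (all of the names here are invented for illustration, with stub bodies just so it compiles):

using System.Collections.Generic;

public class ReportJob
{
    // "Run" only orchestrates; each step lives in its own small method.
    public void Run()
    {
        List<string> data = GetRecentData();
        List<string> errors = CheckDataForErrors(data);

        if (errors.Count > 0)
        {
            ReportErrors(errors);
            return;
        }

        ProcessValidData(data);
        ReportSuccess();
    }

    // Stub bodies to make the sketch compile; the real logic goes here.
    List<string> GetRecentData() { return new List<string> { "42" }; }
    List<string> CheckDataForErrors(List<string> data) { return new List<string>(); }
    void ReportErrors(List<string> errors) { }
    void ProcessValidData(List<string> data) { }
    void ReportSuccess() { }
}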
This is a huge topic and you are in way over your head. I'd recommend an object oriented tutorial such as this http://www.blackwasp.co.uk/csharpobjectoriented.aspx or one of many others.
I would perceive self-contained functions as methods that have no outside dependencies (i.e. member variables, properties, etc.).
Translated, methods that do not rely on state.
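In code, that reading would be something like this sketch:

public static class PriceMath
{
    // Self-contained in that sense: no fields, no properties, no shared state;
    // the result depends only on the arguments passed in.
    public static decimal ApplyDiscount(decimal price, decimal discountPercent)
    {
        return price - (price * discountPercent / 100m);
    }
}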
Just a guess though...
I know generics are in C# to fulfill a role similar to C++ templates but I really need a way to generate some code at compile time - in this particular situation it would be very easy to solve the problem with C++ templates.
Does anyone know of any alternatives? Perhaps a VS plug-in that preprocesses the code or something like that? It doesn't need to be very sophisticated, I just need to generate some methods at compile time.
Here's a very simplified example in C++. Note that this method would be called inside a tight loop with various conditions instead of just "Advanced", and those conditions would change only once per frame; using ifs would be too slow, and writing all the alternative methods by hand would be impossible to maintain. Also note that performance is very important, which is why I need this to be generated at compile time.
template <bool Advanced>
int TraceRay( Ray r )
{
    do
    {
        if ( WalkAndTestCollision( r ) )    // presumably sets 'collision' as a side effect
        {
            if ( Advanced )                 // compile-time constant, so the branch folds away
                return AdvancedShade( collision );
            else
                return SimpleShade( collision );
        }
    }
    while ( InsideScene( r ) );

    return 0;   // ray left the scene without hitting anything
}
You can use T4.
EDIT: In your example, you can use a simple bool parameter.
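A rough C# rendering of that suggestion might look like this (Ray, Collision and the helper methods are stand-ins, since the question's real code isn't shown in full):

// Placeholder types standing in for the question's Ray/Collision.
struct Ray { }
struct Collision { }

class Tracer
{
    // A direct rendering of the template example with a plain bool parameter.
    // Unlike the C++ template this is a single method, and whether the JIT folds
    // the branch away for a constant argument is not guaranteed.
    public int TraceRay(Ray r, bool advanced)
    {
        do
        {
            Collision collision;
            if (WalkAndTestCollision(r, out collision))
                return advanced ? AdvancedShade(collision) : SimpleShade(collision);
        }
        while (InsideScene(r));

        return 0;   // ray left the scene without hitting anything
    }

    // Stubs so the sketch compiles; the real implementations live elsewhere.
    bool WalkAndTestCollision(Ray r, out Collision c) { c = new Collision(); return true; }
    bool InsideScene(Ray r) { return false; }
    int AdvancedShade(Collision c) { return 2; }
    int SimpleShade(Collision c) { return 1; }
}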
Not really, as far as I know. You can do this type of thing at runtime, of course; here are a few meta-programming options, none of them trivial:
reflection (the simplest option if you don't need "fastest possible")
CSharpCodeProvider and some code-generation (rough sketch below)
the same with CodeDom
ILGenerator if you want hardcore
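The CSharpCodeProvider route, for instance, looks roughly like this minimal sketch (the generated class is just a placeholder):

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class RuntimeCodeGenDemo
{
    static void Main()
    {
        // Build the source as a string, compile it in memory, then call the
        // generated method via reflection.
        string source = @"
            public static class Generated
            {
                public static int Double(int x) { return x * 2; }
            }";

        var provider = new CSharpCodeProvider();
        var options = new CompilerParameters { GenerateInMemory = true };
        CompilerResults results = provider.CompileAssemblyFromSource(options, source);

        var method = results.CompiledAssembly
                            .GetType("Generated")
                            .GetMethod("Double");

        Console.WriteLine(method.Invoke(null, new object[] { 21 }));   // 42
    }
}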
Generics do work like templates, if that's the case.
There is also a way to create code at runtime; check this CodeProject example:
CodeProject
In addition to Marc's excellent suggestions, you may want to have a look at PostSharp.
I've done some meta-programming-style tricks using static generics that use reflection (and now I'm using dynamic code generation with System.Linq.Expressions, as well as having used ILGenerator for some more insane stuff). See http://www.lordzoltan.org/blog/post/Pseudo-Template-Meta-Programming-in-C-Sharp-Part-2.aspx for an example I put together (sorry about the lack of code formatting - it's a very old post!) that might be of use.
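To illustrate the static-generic idea (a minimal sketch, not the code from the linked post): the static field of a generic class is initialised once per closed type, so the expensive expression compilation happens only on first use.

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

// The expression tree is built and compiled once per T, the first time
// FastActivator<T> is touched; afterwards Create() is just a delegate call.
static class FastActivator<T> where T : new()
{
    public static readonly Func<T> Create =
        Expression.Lambda<Func<T>>(Expression.New(typeof(T))).Compile();
}

class Demo
{
    static void Main()
    {
        List<int> list = FastActivator<List<int>>.Create();
        Console.WriteLine(list.Count);   // 0
    }
}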
I've also used T4 (link goes to a series of tutorials by my favourite authority on T4 - Oleg Sych), as suggested by SLaks - which is a really nice way to generate code, especially if you're also comfortable with Asp.Net-style syntax. If you generate partial classes using the T4 output, then the developer can then embellish and add to the class however they see fit.
If it absolutely has to be compile-time - then I'd go for T4 (or write your own custom tool, but that's a bit heavy). If not, then a static generic could help, probably in partnership with the kind of solutions mentioned by Marc.
If you want true code generation, you could use CodeSmith http://www.codesmithtools.com which isn't free/included like T4, but has some more features, and can function as a VS.NET plug-in.
Here's an older article that uses genetic programming to generate and compile code on the fly:
http://msdn.microsoft.com/en-us/magazine/cc163934.aspx
"The Generator class is the kernel of the genetic programming application. It discovers available base class terminals and functions. It generates, compiles, and executes C# code to search for a good solution to the problem it is given. The constructor is passed a System.Type which is the root class for .NET reflection operations."
Might be overkill for your situation, but does show what C# can do. (Note this article is from the 1.0 days)
After looking at how Go handles interfaces and liking it, I started thinking about how you could achieve similar duck-typing in C# like this:
var mallard = new Mallard(); // doesn't implement IDuck but has the right methods
IDuck duck = DuckTyper.Adapt<Mallard,IDuck>(mallard);
The DuckTyper.Adapt method would use System.Reflection.Emit to build an adapter on the fly. Maybe somebody has already written something like this. I guess it's not too different from what mocking frameworks already do.
However, this would throw exceptions at run-time if Mallard doesn't actually have the right IDuck methods. To get the error earlier at compile time, I'd have to write a MallardToDuckAdapter which is exactly what I'm trying to avoid.
Is there a better way?
edit: apparently the proper term for what I call "safe duck-typing" is structural typing.
How can you know if a cow walks like a duck and quacks like a duck if you don't have a living, breathing cow in front of you?
Duck-typing is a concept used at run-time. A similar concept at compile-time is structural typing which is AFAIK not supported by the CLR. (The CLR is centred around nominative typing.)
[A structural type system] contrasts with nominative systems, where comparisons are based on explicit declarations or the names of the types, and duck typing, in which only the part of the structure accessed at runtime is checked for compatibility.
The usual way to ensure that duck-typing throws no exceptions at run-time is unit tests.
DuckTyping for C#
Reflection.Emit is used to emit IL that directly calls the original object
I don't think this library will give you compile-time errors though; I'm not sure that would be entirely feasible. Use unit tests to help compensate for that.
I don't think there's another way in which you would get a compile time error.
However, this is something that Unit Testing is great for. You would write a unit test to verify that
DuckTyper.Adapt<Mallard, IDuck>(mallard);
successfully maps.
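Something along these lines (NUnit-style, and assuming Adapt throws when Mallard doesn't actually have the members IDuck requires):

using NUnit.Framework;

[TestFixture]
public class DuckTypingTests
{
    [Test]
    public void Mallard_can_be_adapted_to_IDuck()
    {
        var mallard = new Mallard();

        // If Mallard ever stops matching IDuck's shape, this call is expected to
        // throw, so the break shows up when the tests run rather than in production.
        IDuck duck = DuckTyper.Adapt<Mallard, IDuck>(mallard);

        Assert.IsNotNull(duck);
    }
}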
I know that implicit interfaces (which is what Go interfaces are) were planned for VB 10 (no idea about C#). Unfortunately, they were scrapped before release (I think they didn’t even make it into beta …). It would be nice to see whether they will make an appearance in a future version of .NET.
Of course, the new dynamic types can be used to achieve much the same but this is still not the same – implicit interfaces still allow strong typing, which I find important.
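For example, with dynamic the adapter disappears entirely, but so does the compile-time check; a missing member only surfaces when the call executes:

using System;

class Mallard
{
    public void Quack() { Console.WriteLine("Quack!"); }
}

class Demo
{
    static void Main()
    {
        dynamic duck = new Mallard();   // no IDuck, no adapter
        duck.Quack();                   // bound at run time

        // duck.Moo();                  // compiles fine, fails at run time with RuntimeBinderException
    }
}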
A number of features were introduced into C# 3.0 which made me uneasy, such as object initializers, extension methods and implicitly typed variables. Now in C# 4.0 with things like the dynamic keyword I'm getting even more concerned.
I know that each of these features CAN be used in appropriate ways BUT in my view they make it easier for developers to make bad coding decisions and therefore write worse code. It seems to me that Microsoft are trying to win market share by making the coding easy and undemanding. Personally I prefer a language that is rigorous and places more demands on my coding standards and forces me to structure things in an OOP way.
Here are a few examples of my concerns for the features mentioned above:
Object constructors can do important logic that is not exposed to the consumer. This is in the control of the object developer. Object initializers take this control away and allow the consumer to make the decisions about which fields to initialize.
EDIT: I had not appreciated that you can mix constructor and initializer (my bad), but this starts to look messy to my mind, so I am still not convinced it is a step forward.
Allowing developers to extend built-in types using extension methods allows all and sundry to start adding their favourite pet methods to the string class, which can end up with a bewildering array of options, or requires more policing of coding standards to weed these out.
Allowing implicitly typed variables allows quick and dirty programming instead of proper OOP approaches, which can quickly become an unmanageable mess of vars all over your application.
Are my worries justified?
Object initializers simply allow the client to set properties immediately after construction; no control is relinquished, as the caller must still ensure all of the constructor arguments are satisfied.
Personally I feel they add very little:
Person p1 = new Person("Fred");
p1.Age = 30;
p1.Height = 123;

Person p2 = new Person("Fred")
{
    Age = 30,
    Height = 123
};
I know a lot of people dislike the 'var' keyword. I can understand why, as it openly invites abuse, but I do not mind it provided the type is blindingly obvious:
var p1 = new Person("Fred");
Person p2 = GetPerson();
In the second line above, the type would not be obvious from the right-hand side alone, despite the method name, so I would write out the type explicitly in that case.
Extension methods -- I would use very sparingly but they are very useful for extending the .NET types with convenience methods, especially IEnumerable, ICollection and String. String.IsNullOrEmpty() as an extension method is very nice, as it can be called on null references.
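A minimal sketch of that kind of convenience method (the extension itself is invented for illustration, wrapping the BCL method of the same name):

using System;

static class StringExtensions
{
    // An extension method is an ordinary static method; the receiver is just the
    // first argument, which is why calling it on a null reference is fine.
    public static bool IsNullOrEmpty(this string value)
    {
        return string.IsNullOrEmpty(value);
    }
}

class Demo
{
    static void Main()
    {
        string s = null;
        Console.WriteLine(s.IsNullOrEmpty());   // True, no NullReferenceException
    }
}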
I do not think you need to worry: good developers will always use their tools with respect, and bad developers will always find ways to misuse their tools; limiting the toolset will not solve this problem.
You could limit the features of C# 3.0 your developers can use by writing the restrictions into your coding standards. Then when code is reviewed prior to check in, any code that breaches these rules should be spotted and the check in refused. One such case could well be extension methods.
Obviously, your developers will want to use the new features - all developers do. However, if you've got good, well documented reasons why they shouldn't be used, good developers will follow them. You should also be open to revising these rules as new information comes to light.
With VS 2008 you can specify which version of .NET you want to target (Right click over the solution and select Properties > Application). If you limit yourself to .NET 2.0 then you won't get any of the new features in .NET 3.5. Obviously this doesn't help if you want to use some of the features.
However, I think your fears over vars are unwarranted. C# is still as strongly typed as ever. Declaring something as var simply tells the compiler to pick the best type for this variable. The variable can't change type; it's always an int or Person or whatever. Personally I follow the same rules as Paul Ruane: if the type is clear from the syntax then use var; if not, name the type explicitly.
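A trivial sketch:

class Demo
{
    static void Main()
    {
        var age = 30;        // compile-time type is int
        var name = "Fred";   // compile-time type is string

        // age = "thirty";   // does not compile: var is not a variant/dynamic type

        System.Console.WriteLine(name + " is " + age);
    }
}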
I have seen your position expressed like this:
Build a development environment that any fool can use and what you get is many fools developing.
This is very true, but the rest of us look good by contrast. I regard this as a good thing. One of the funniest postings I have ever seen remarked that
VB should not be underestimated. It is an extremely powerful tool for keeping idiots out of this [C++] newsgroup.
More seriously, whether or not a tool is dangerous depends on the wisdom of the wielder. The only way to prevent folly is to prevent action. foreach looks innocuous, but you can't even remove items as you iterate a collection, because removing an item invalidates the iterator. You end up dumping it in favour of a classic for loop.
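For instance, this is the sort of thing that forces the switch (a small sketch):

using System;
using System.Collections.Generic;

class Demo
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3, 4, 5 };

        // foreach (var n in numbers)
        //     if (n % 2 == 0) numbers.Remove(n);   // InvalidOperationException

        // Iterating backwards keeps the remaining indexes valid while removing.
        for (int i = numbers.Count - 1; i >= 0; i--)
        {
            if (numbers[i] % 2 == 0)
                numbers.RemoveAt(i);
        }

        foreach (var n in numbers)
            Console.WriteLine(n);   // 1, 3, 5
    }
}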
I think the only really justified issue in your bunch is overuse of extension methods. When important functionality is only accessible through extension methods, sometimes it's hard for everyone in a group to find out about and use that functionality.
Worrying about object initializers and the "var" keyword seems very nitpicky. Both are simple and predictable syntax that can be used to make code more readable, and it's not clear to me how you see them being "abused."
I suggest you address concerns like this through written, agreed-upon coding standards. If nobody can come up with good reasons to use new language features, then there's no need to use them, after all.
Object initializers are just fancy syntax. There is nothing the developer can do with them that he couldn't already do before - they do however save you a few lines of code.
If by "extend built in types" you mean extension methods - again, this is nothing new, just better syntax. The methods are static methods that appear as if they were members. The build in classes remain untouched.
Implicitly typed variables are needed for LINQ. I also use them with generics that have a lot of type parameters. But I'd agree that I wouldn't like to see them used exclusively.
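The LINQ case is where var is genuinely unavoidable, because a query can produce an anonymous type that has no name you could write down (a small sketch):

using System;
using System.Linq;

class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

class Demo
{
    static void Main()
    {
        var people = new[]
        {
            new Person { Name = "Fred", Age = 30 },
            new Person { Name = "Jim",  Age = 12 }
        };

        // 'var' is required here: the select clause produces an anonymous type.
        var adults = from p in people
                     where p.Age >= 18
                     select new { p.Name, p.Age };

        foreach (var a in adults)
            Console.WriteLine(a.Name + " (" + a.Age + ")");
    }
}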
Of course every feature can be abused but I think that these new features actually let you write code that is easier to read.
One big mitigating factor about var is that it can never move between scopes. It can not be a return type or a parameter type. This makes it far safer in my mind, as it is always tightly typed and always implementation detail of one method.
New features were introduced because Microsoft realized they were absolutely necessary for implementing new language capabilities, like LINQ. So you can use the same strategy:
1) know about those features,
2) do not use them until you find them absolutely necessary.
If you really understand those features, I bet you'd feel it necessary pretty soon. :)
At least with "var" and intializers you're not really able to do anything new, it's just a new way to write things. Although it does look like object initializers compile to slightly different IL.
One angle that really blows my mind about extension methods is that you can put them on an interface. That means a class can inherit concrete code by implementing an interface. And since a class can implement multiple interfaces that means, in a roundabout sort of way, that C# now has something like multiple inheritance. So that's a new feature that should definitely be handled with care.
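A quick sketch of what that looks like (all names invented):

using System;

interface IGreeter
{
    string Name { get; }
}

static class GreeterExtensions
{
    // "Concrete" behaviour attached to the interface: every implementer picks it up.
    public static void Greet(this IGreeter greeter)
    {
        Console.WriteLine("Hello, " + greeter.Name);
    }
}

class English : IGreeter
{
    public string Name { get { return "Alice"; } }
}

class Demo
{
    static void Main()
    {
        new English().Greet();   // Hello, Alice
    }
}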
Are my worries justified?
No. This has been another edition of simple answers to simple questions.
References in C# are quite similar to those in C++, except that they are garbage collected.
Why is it then so difficult for the C# compiler to support the following:
Member functions marked const.
References to data types (other than string) marked const, through which only const member functions can be called?
I believe it would be really useful if C# supported this. For one, it would really help curb the seemingly widespread abandon with which C# programmers return naked references to private data (at least that's what I've seen at my workplace).
Or is there already something equivalent in C# which I'm missing? (I know about the readonly and const keywords, but they don't really serve the above purpose)
I suspect there are some practical reasons, and some theoretical reasons:
Should the constness apply to the object or the reference? If it's in the reference, should this be compile-time only, or as a bit within the reference itself? Can something else which has a non-const reference to the same object fiddle with it under the hood?
Would you want to be able to cast it away as you can in C++? That doesn't sound very much like something you'd want on a managed platform... but what about all those times where it makes sense in C++?
Syntax gets tricky (IMO) when you have more than one type involved in a declaration - think arrays, generics etc. It can become hard to work out exactly which bit is const.
If you can't cast it away, everyone has to get it right. In other words, both the .NET framework types and any other 3rd party libraries you use all have to do the right thing, or you're left with nasty situations where your code can't do the right thing because of a subtle problem with constness.
There's a big one in terms of why it can't be supported now though:
Backwards compatibility: there's no way all libraries would be correctly migrated to it, making it pretty much useless :(
I agree it would be useful to have some sort of constness indicator, but I can't see it happening, I'm afraid.
EDIT: There's been an argument about this raging in the Java community for ages. There's rather a lot of commentary on the relevant bug which you may find interesting.
As Jon already covered (of course), const correctness is not as simple as it might appear. C++ does it one way. D does it another (arguably more correct/useful) way. C# flirts with it but doesn't do anything more daring, as you have discovered (and likely never will, as Jon again covered well).
That said, I believe that many of Jon's "theoretical reasons" are resolved in D's model.
In D (2.0), const works much like C++, except that it is fully transitive (so const applied to a pointer would apply to the object pointed to, any members of that object, any pointers that object had, objects they pointed to etc) - but it is explicit that this only applies from the variable that you have declared const (so if you already have a non-const object and you take a const pointer to it, the non-const variable can still mutate the state).
D introduces another keyword - invariant - which applies to the object itself. This means that nothing can ever change the state once initialised.
The beauty of this arrangement is that a const method can accept both const and invariant objects. Since invariant objects are the bread and butter of the functional world, a const method can be marked as "pure" in the functional sense, even though it may be used with mutable objects.
Getting back on track - I think it's the case that we're only now (in the latter half of the noughties) understanding how best to use const (and invariant). .NET was originally defined when things were hazier, so it didn't commit to too much - and now it's too late to retrofit.
I'd love to see a port of D run on the .Net VM, though :-)
Mr. Hejlsberg, the designer of the C# language, has already answered this question:
http://www.artima.com/intv/choicesP.html
I wouldn't be surprised if immutable types were added to a future version of C#.
There have already been moves in that direction with C# 3.0.
Anonymous types, for example, are immutable.
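For example:

class Demo
{
    static void Main()
    {
        var point = new { X = 1, Y = 2 };   // anonymous type; its properties are read-only

        // point.X = 5;   // does not compile: the property has no setter

        System.Console.WriteLine(point);    // { X = 1, Y = 2 }
    }
}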
I think, as a result of extensions designed to embrace parallelism, you will be likely to see immutability pop up more and more.
The question is, do we need constness in C#?
I'm pretty sure that the JITter knows when a given method is not going to affect the object itself and performs the corresponding optimizations automagically (maybe by emitting call instead of callvirt?).
I'm not sure we need those: since most of the pros of constness are performance-related, you end up back at point 1.
Besides that, C# has the readonly keyword.
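For what it does cover, readonly at least gives you shallow immutability at the field level (a small sketch):

using System;

class Circle
{
    // readonly: assignable only in the declaration or a constructor, so the
    // value can't be swapped afterwards (shallow immutability only).
    private readonly double radius;

    public Circle(double radius)
    {
        this.radius = radius;
    }

    public double Area
    {
        get { return Math.PI * radius * radius; }
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(new Circle(2).Area);   // 12.566...
    }
}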