First off, sorry for the probably confusing title; feel free to change it if somebody knows a more appropriate one.
I'm in the process of porting a math library from C++ to C#. Porting the actual code was done in a couple of days. However, I've been struggling to solve all the bugs introduced by the fact that C# classes are reference types versus C++ classes being value types.
Let me explain: if I have a class Foo in C++ that has a member of class Bar, and I assign an existing instance of Bar to that member, I'm essentially copying it, provided the C++ compiler knows a way to do so (and provided I'm not using pointers, of course).
If I do the same in C#, I assign a reference. This leads to incredibly hard-to-find side effects. If, in C#, my code later modifies my Bar instance after assigning it to Foo, then of course the Foo.Bar object gets modified as well. Not good.
So my question is: is there an easy way, or a best-practice approach, to find the places in my code that assign a reference where I was assuming I assigned a copy? I even considered creating a new object every time the getter is called on an object, and copying the values there. Of course this would mean that I can never have a reference to an object again. Again, not good.
So basically what I'm asking is: how can I find every occurrence of the assignment operator (=) where the TARGET is a reference type? It's those lines of code I need to change.
I've tried to do it manually by simply searching for the '=' character, but after going through thousands and thousands of lines of code I'm still missing the occasional assignment target.
You should look into the struct feature of C# instead of class. Structs are value types.
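To illustrate the difference, here's a minimal sketch (the type names are made up; with a struct the assignment copies the value, with a class it copies only the reference):
struct BarStruct { public double X; }
class BarClass { public double X; }
class CopyDemo
{
    static void Main()
    {
        var a = new BarClass { X = 1 };
        var b = a;                        // b is just another reference to the same object
        b.X = 2;
        System.Console.WriteLine(a.X);    // prints 2 - the "copy" was only a reference

        var c = new BarStruct { X = 1 };
        var d = c;                        // d is an independent value copy
        d.X = 2;
        System.Console.WriteLine(c.X);    // prints 1 - the original is untouched
    }
}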
I want to declare a COM Interface in MIDL that allows for returning a pointer (like in the ID3D11Blob). I understand that pointers are a special thing in COM because of the stubs generated for RPC calls. I do not need RPC, but only want to access the COM server from C#. The question is: can I declare the interface in such a way that the C# stub returns an IntPtr? I have tried to add [local] to enable void pointers, but that does not suffice.
The interface should look in MIDL like
[local] void *PeekData(void)
and in C# like
IntPtr PeekData()
Is this possible? If so, how?
Thanks in advance,
Christoph
Edit: To rephrase the question: Why is
HRESULT GetData([in, out, size_is(*size)] BYTE data[], [in, out] ULONG *size);
becoming
void GetData(ref byte, ref uint)
and how can I avoid the first parameter becoming a single byte in C#?
This goes wrong because you imported the COM server declarations from a type library. Type libraries were originally designed to support a subset of COM originally called "OLE Automation", which restricts the kinds of types you can use for method arguments. In particular, raw pointers are not permitted; an array must be declared as a SAFEARRAY, which ensures that the caller can always index it safely: safe arrays carry extra metadata that describes the rank and the lower/upper bounds of the array.
The [size_is] attribute is only understood by MIDL; it is used to create the proxy and the stub for the interface. Knowing how many elements the array contains also matters when the array needs to be copied into an interop packet that's sent over the wire to the stub.
Since type libraries don't support a declaration like this, the [size_is] attribute is stripped and the type library importer only sees BYTE*. That is ambiguous: it could be a byte passed by reference or a pointer to an array of bytes. The importer chooses the former, since it has no hope of making an array work when it doesn't know the size of the array. So you get ref byte.
To fix this, you have to alter the interop library so you can provide the proper declaration of the method. That requires the [MarshalAs] attribute, declaring the byte[] argument as an LPArray with the SizeParamIndex property set, so you can tell the CLR that the array size is determined by the size argument. There are two basic ways to go about it:
Decompile the interop library with ildasm.exe, modify the .il file, and put it back together with ilasm.exe. Write a sample C# declaration and look at it with ildasm.exe to see how to edit the IL. This is the approach that Microsoft recommends.
Use a good decompiler that can decompile IL back to C#. Reflector and ILSpy are popular. Copy/paste the generated code into a source file of your project and edit the method, applying the [MarshalAs] attribute. The advantage is that editing is easier and you no longer have a dependency on the interop library.
In either case, you want to make sure that the COM server is stable so you don't have to do this very often. If it is not, then modifying the server itself is highly recommended: use a safe array.
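For reference, the edited C# declaration could look roughly like this (a sketch only; the interface name and GUID are placeholders, the point is the [MarshalAs] attribute):
using System.Runtime.InteropServices;

[ComImport]
[Guid("00000000-0000-0000-0000-000000000001")]    // placeholder GUID
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
interface IDataSource
{
    // The [size_is(*size)] BYTE data[] parameter becomes an LPArray whose
    // length is taken from the parameter at index 1, i.e. the size argument.
    void GetData(
        [In, Out, MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)] byte[] data,
        ref uint size);
}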
I think I found the solution on http://msdn.microsoft.com/en-gb/library/z6cfh6e6(v=vs.110).aspx#cpcondefaultmarshalingforarraysanchor2: This is the default behaviour for C-style arrays. One can avoid that by using SAFEARRAYs.
In one C# maintenance project I came across the following variable declaration:
Int32* iProgressAddress;
Is it a pointer declaration in C#?
I thought that there was no pointer concept in C#. What does that statement mean?
C# does support pointers, but they're limited to pointing at unmanaged types: primitive numeric types such as int and float, enums, and other pointer types.
edit: as well as value types (structs) that contain only unmanaged fields
Yes, it is.
Notice that the method is marked unsafe, and that the assembly has to be compiled with unsafe code allowed (/unsafe).
There are a lot of things to know before using pointers from managed code.
For instance, pointer pinning.
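For example, a minimal sketch of what that can look like (compiled with unsafe code enabled; the names are made up):
class PointerDemo
{
    static unsafe void Main()
    {
        int[] progress = new int[4];

        // The array lives on the managed heap, so it has to be pinned before
        // taking its address - otherwise the GC could move it mid-use.
        fixed (System.Int32* iProgressAddress = progress)
        {
            *iProgressAddress = 42;                   // write through the pointer
            System.Console.WriteLine(progress[0]);    // prints 42
        }
    }
}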
I'm sorry, I'm afraid that's a pointer, and you have to get used to it.
REALLY! Pointers aren't that scary. :)
References in C# are quite similar to those in C++, except that the objects they refer to are garbage collected.
Why is it then so difficult for the C# compiler to support the following:
Member functions marked const.
References to data types (other than string) marked const, through which only const member functions can be called?
I believe it would be really useful if C# supported this. For one, it would really help curb the seemingly widespread abandon with which C# programmers return naked references to private data (at least that's what I've seen at my workplace).
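To give a concrete, made-up sketch of the kind of thing I keep running into:
using System;
using System.Collections.Generic;

class Schedule
{
    private List<DateTime> holidays = new List<DateTime>();

    // The getter hands out the private list itself; any caller can Add() or
    // Clear() it, and there is no 'const reference' I could return instead.
    public List<DateTime> Holidays
    {
        get { return holidays; }
    }
}

class Client
{
    static void Main()
    {
        var schedule = new Schedule();
        schedule.Holidays.Add(DateTime.Today);          // silently mutates Schedule's private state
        Console.WriteLine(schedule.Holidays.Count);     // prints 1
    }
}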
Or is there already something equivalent in C# which I'm missing? (I know about the readonly and const keywords, but they don't really serve the above purpose)
I suspect there are some practical reasons, and some theoretical reasons:
Should the constness apply to the object or the reference? If it's in the reference, should this be compile-time only, or as a bit within the reference itself? Can something else which has a non-const reference to the same object fiddle with it under the hood?
Would you want to be able to cast it away as you can in C++? That doesn't sound very much like something you'd want on a managed platform... but what about all those times where it makes sense in C++?
Syntax gets tricky (IMO) when you have more than one type involved in a declaration - think arrays, generics etc. It can become hard to work out exactly which bit is const.
If you can't cast it away, everyone has to get it right. In other words, both the .NET framework types and any other 3rd party libraries you use all have to do the right thing, or you're left with nasty situations where your code can't do the right thing because of a subtle problem with constness.
There's a big one in terms of why it can't be supported now though:
Backwards compatibility: there's no way all libraries would be correctly migrated to it, making it pretty much useless :(
I agree it would be useful to have some sort of constness indicator, but I can't see it happening, I'm afraid.
EDIT: There's been an argument about this raging in the Java community for ages. There's rather a lot of commentary on the relevant bug which you may find interesting.
As Jon already covered (of course), const correctness is not as simple as it might appear. C++ does it one way. D does it another (arguably more correct/useful) way. C# flirts with it but doesn't do anything more daring, as you have discovered (and likely never will, as Jon also covered).
That said, I believe that many of Jon's "theoretical reasons" are resolved in D's model.
In D (2.0), const works much like C++, except that it is fully transitive (so const applied to a pointer would apply to the object pointed to, any members of that object, any pointers that object had, objects they pointed to etc) - but it is explicit that this only applies from the variable that you have declared const (so if you already have a non-const object and you take a const pointer to it, the non-const variable can still mutate the state).
D introduces another keyword - invariant - which applies to the object itself. This means that nothing can ever change the state once initialised.
The beauty of this arrangement is that a const method can accept both const and invariant objects. Since invariant objects are the bread and butter of the functional world, a const method can be marked as "pure" in the functional sense - even though it may be used with mutable objects.
Getting back on track - I think it's the case that we're only now (in the latter half of the noughties) understanding how best to use const (and invariant). .NET was originally defined when things were hazier, so it didn't commit to too much - and now it's too late to retrofit.
I'd love to see a port of D run on the .Net VM, though :-)
Mr. Hejlsberg, the designer of the C# language, has already answered this question:
http://www.artima.com/intv/choicesP.html
I wouldn't be surprised if immutable types were added to a future version of C#.
There have already been moves in that direction with C# 3.0.
Anonymous types, for example, are immutable.
I think, as a result of extensions designed to embrace parallelism, you will be likely to see immutability pop up more and more.
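For instance, a quick sketch of that immutability:
class AnonymousTypeDemo
{
    static void Main()
    {
        var point = new { X = 1, Y = 2 };    // anonymous type (C# 3.0)

        // point.X = 5;                       // does not compile: the properties are read-only
        System.Console.WriteLine(point.X + point.Y);
    }
}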
The question is, do we need constness in C#?
I'm pretty sure that the JITter knows that a given method is not going to affect the object itself and performs the corresponding optimizations automagically (maybe by emitting call instead of callvirt?).
I'm not sure we need it; since most of the pros of constness are performance-related, you end up back at point 1.
Besides that, C# has the readonly keyword.
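Note, though, that readonly only protects the field itself, not the object it refers to - a quick sketch (type names made up):
class Counter { public int Value; }

class ReadonlyDemo
{
    // readonly prevents reassigning the field after construction...
    private readonly Counter counter = new Counter();

    public void Touch()
    {
        // counter = new Counter();    // does not compile
        counter.Value++;               // ...but the referenced object can still be mutated
    }

    static void Main()
    {
        var demo = new ReadonlyDemo();
        demo.Touch();
        System.Console.WriteLine(demo.counter.Value);    // prints 1
    }
}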
I mostly use Java and generics are relatively new. I keep reading that Java made the wrong decision or that .NET has better implementations etc. etc.
So, what are the main differences between C++, C#, Java in generics? Pros/cons of each?
I'll add my voice to the noise and take a stab at making things clear:
C# Generics allow you to declare something like this.
List<Person> foo = new List<Person>();
and then the compiler will prevent you from putting things that aren't Person into the list.
Behind the scenes the C# compiler is just putting List<Person> into the .NET dll file, but at runtime the JIT compiler goes and builds a new set of code, as if you had written a special list class just for containing people - something like ListOfPerson.
The benefit of this is that it makes it really fast. There's no casting or any other stuff, and because the dll contains the information that this is a List of Person, other code that looks at it later on using reflection can tell that it contains Person objects (so you get intellisense and so on).
The downside of this is that old C# 1.0 and 1.1 code (before they added generics) doesn't understand these new List<something>, so you have to manually convert things back to plain old List to interoperate with them. This is not that big of a problem, because C# 2.0 binary code is not backwards compatible. The only time this will ever happen is if you're upgrading some old C# 1.0/1.1 code to C# 2.0
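To see that the element type really does survive into the runtime, here's a rough sketch (Person is just a stand-in class):
using System;
using System.Collections.Generic;

class Person { }

class ReflectionDemo
{
    static void Main()
    {
        object list = new List<Person>();

        Type t = list.GetType();
        Console.WriteLine(t);                             // System.Collections.Generic.List`1[Person]
        Console.WriteLine(t.GetGenericArguments()[0]);    // Person
    }
}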
Java Generics allow you to declare something like this.
ArrayList<Person> foo = new ArrayList<Person>();
On the surface it looks the same, and it sort-of is. The compiler will also prevent you from putting things that aren't Person into the list.
The difference is what happens behind the scenes. Unlike C#, Java does not go and build a special ListOfPerson - it just uses the plain old ArrayList which has always been in Java. When you get things out of the array, the usual Person p = (Person)foo.get(1); casting-dance still has to be done. The compiler is saving you the key-presses, but the speed hit/casting is still incurred just like it always was.
When people mention "Type Erasure" this is what they're talking about. The compiler inserts the casts for you, and then 'erases' the fact that it's meant to be a list of Person not just Object
The benefit of this approach is that old code which doesn't understand generics doesn't have to care. It's still dealing with the same old ArrayList as it always has. This is more important in the Java world because they wanted to support compiling code with Java 5 generics and having it run on old 1.4 or earlier JVMs, which Microsoft deliberately decided not to bother with.
The downside is the speed hit I mentioned previously, and also because there is no ListOfPerson pseudo-class or anything like that going into the .class files, code that looks at it later on (with reflection, or if you pull it out of another collection where it's been converted into Object or so on) can't tell in any way that it's meant to be a list containing only Person and not just any other array list.
C++ Templates allow you to declare something like this
std::list<Person>* foo = new std::list<Person>();
It looks like C# and Java generics, and it will do what you think it should do, but behind the scenes different things are happening.
It has the most in common with C# generics in that it builds special pseudo-classes rather than just throwing the type information away like java does, but it's a whole different kettle of fish.
Both C# and Java produce output which is designed for virtual machines. If you write some code which has a Person class in it, in both cases some information about a Person class will go into the .dll or .class file, and the JVM/CLR will do stuff with this.
C++ produces raw x86 binary code. Everything is not an object, and there's no underlying virtual machine which needs to know about a Person class. There's no boxing or unboxing, and functions don't have to belong to classes, or indeed anything.
Because of this, the C++ compiler places no restrictions on what you can do with templates - basically any code you could write manually, you can get templates to write for you.
The most obvious example is adding things:
In C# and Java, the generics system needs to know what methods are available for a class, and it needs to pass this down to the virtual machine. The only way to tell it this is by either hard-coding the actual class in, or using interfaces. For example:
string addNames<T>( T first, T second ) { return first.Name() + second.Name(); }
That code won't compile in C# or Java, because it doesn't know that the type T actually provides a method called Name(). You have to tell it - in C# like this:
interface IHasName{ string Name(); };
string addNames<T>( T first, T second ) where T : IHasName { .... }
And then you have to make sure the things you pass to addNames implement the IHasName interface and so on. The Java syntax is different (<T extends IHasName>), but it suffers from the same problems.
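For completeness, here's a rough sketch of how those pieces fit together in C# (the Person class is just for illustration):
interface IHasName { string Name(); }

class Person : IHasName
{
    private readonly string name;
    public Person(string name) { this.name = name; }
    public string Name() { return name; }
}

class Demo
{
    static string addNames<T>(T first, T second) where T : IHasName
    {
        return first.Name() + second.Name();
    }

    static void Main()
    {
        // T is inferred as Person, and the constraint guarantees Name() exists.
        System.Console.WriteLine(addNames(new Person("Fred "), new Person("Barney")));
    }
}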
The 'classic' case for this problem is trying to write a function which does this
string addNames<T>( T first, T second ) { return first + second; }
You can't actually write this code because there is no way to declare an interface with the + method in it. You fail.
C++ suffers from none of these problems. The compiler doesn't care about passing types down to any VMs - if both your objects have a .Name() function, it will compile. If they don't, it won't. Simple.
So, there you have it :-)
C++ rarely uses the "generics" terminology. Instead, the word "templates" is used and is more accurate. Templates describe one technique to achieve a generic design.
C++ templates are very different from what both C# and Java implement, for two main reasons. The first reason is that C++ templates allow not only compile-time type arguments but also compile-time const-value arguments: templates can be given integers or even function signatures as arguments. This means that you can do some quite funky stuff at compile time, e.g. calculations:
template <unsigned int N>
struct product {
static unsigned int const VALUE = N * product<N - 1>::VALUE;
};
template <>
struct product<1> {
static unsigned int const VALUE = 1;
};
// Usage:
unsigned int const p5 = product<5>::VALUE;
This code also uses the other distinguished feature of C++ templates, namely template specialization. The code defines one class template, product, that has one value argument. It also defines a specialization for that template that is used whenever the argument evaluates to 1. This allows me to define a recursion over template definitions. I believe that this was first discovered by Andrei Alexandrescu.
Template specialization is important for C++ because it allows for structural differences in data structures. Templates as a whole are a means of unifying an interface across types. However, desirable as this is, not all types can be treated equally inside the implementation. C++ templates take this into account. This is very much the same difference that OOP makes between interface and implementation with the overriding of virtual methods.
C++ templates are essential for its algorithmic programming paradigm. For example, almost all algorithms for containers are defined as functions that accept the container type as a template type and treat them uniformly. Actually, that's not quite right: C++ doesn't work on containers but rather on ranges that are defined by two iterators, pointing to the beginning and behind the end of the container. Thus, the whole content is circumscribed by the iterators: begin <= elements < end.
Using iterators instead of containers is useful because it allows you to operate on parts of a container instead of on the whole.
Another distinguishing feature of C++ is the possibility of partial specialization for class templates. This is somewhat related to pattern matching on arguments in Haskell and other functional languages. For example, let's consider a class that stores elements:
template <typename T>
class Store { … }; // (1)
This works for any element type. But let's say that we can store pointers more efficiently than other types by applying some special trick. We can do this by partially specializing for all pointer types:
template <typename T>
class Store<T*> { … }; // (2)
Now, whenever we instantiate the container template for one type, the appropriate definition is used:
Store<int> x; // Uses (1)
Store<int*> y; // Uses (2)
Store<string**> z; // Uses (2), with T = string*.
Anders Hejlsberg himself described the differences here "Generics in C#, Java, and C++".
There are already a lot of good answers on what the differences are, so let me give a slightly different perspective and add the why.
As was already explained, the main difference is type erasure, i.e. the fact that the Java compiler erases the generic types and they don't end up in the generated bytecode. However, the question is: why would anyone do that? It doesn't make sense! Or does it?
Well, what's the alternative? If you don't implement generics in the language, where do you implement them? And the answer is: in the Virtual Machine. Which breaks backwards compatibility.
Type erasure, on the other hand, allows you to mix generic clients with non-generic libraries. In other words: code that was compiled on Java 5 can still be deployed to Java 1.4.
Microsoft, however, decided to break backwards compatibility for generics. That's why .NET Generics are "better" than Java Generics.
Of course, Sun aren't idiots or cowards. The reason why they "chickened out", was that Java was significantly older and more widespread than .NET when they introduced generics. (They were introduced roughly at the same time in both worlds.) Breaking backwards compatibility would have been a huge pain.
Put yet another way: in Java, Generics are a part of the Language (which means they apply only to Java, not to other languages), in .NET they are part of the Virtual Machine (which means they apply to all languages, not just C# and Visual Basic.NET).
Compare this with .NET features like LINQ, lambda expressions, local variable type inference, anonymous types and expression trees: these are all language features. That's why there are subtle differences between VB.NET and C#: if those features were part of the VM, they would be the same in all languages. But the CLR hasn't changed: it's still the same in .NET 3.5 SP1 as it was in .NET 2.0. You can compile a C# program that uses LINQ with the .NET 3.5 compiler and still run it on .NET 2.0, provided that you don't use any .NET 3.5 libraries. That would not work with generics and .NET 1.1, but it would work with Java and Java 1.4.
Follow-up to my previous posting.
Templates are one of the main reasons why C++ fails so abysmally at IntelliSense, regardless of the IDE used. Because of template specialization, the IDE can never be really sure whether a given member exists or not. Consider:
template <typename T>
struct X {
void foo() { }
};
template <>
struct X<int> { };
typedef int my_int_type;
X<my_int_type> a;
a.|
Now, the cursor is at the indicated position and it's damn hard for the IDE to say at that point if, and what, members a has. For other languages the parsing would be straightforward but for C++, quite a bit of evaluation is needed beforehand.
It gets worse. What if my_int_type were defined inside a class template as well? Now its type would depend on another type argument. And here, even compilers fail.
template <typename T>
struct Y {
typedef T my_type;
};
X<Y<int>::my_type> b;
After a bit of thinking, a programmer would conclude that this code is the same as the above: Y<int>::my_type resolves to int, therefore b should be the same type as a, right?
Wrong. At the point where the compiler tries to resolve this statement, it doesn't actually know Y<int>::my_type yet! Therefore, it doesn't know that this is a type. It could be something else, e.g. a member function or a field. This might give rise to ambiguities (though not in the present case), therefore the compiler fails. We have to tell it explicitly that we refer to a type name:
X<typename Y<int>::my_type> b;
Now, the code compiles. To see how ambiguities arise from this situation, consider the following code:
Y<int>::my_type(123);
This code statement is perfectly valid and tells C++ to execute the function call to Y<int>::my_type. However, if my_type is not a function but rather a type, this statement would still be valid and perform a special cast (the function-style cast) which is often a constructor invocation. The compiler can't tell which we mean so we have to disambiguate here.
Both Java and C# introduced generics after their first language release. However, there are differences in how the core libraries changed when generics was introduced. C#'s generics are not just compiler magic and so it was not possible to generify existing library classes without breaking backwards compatibility.
For example, in Java the existing Collections Framework was completely genericised. Java does not have both a generic and legacy non-generic version of the collections classes. In some ways this is much cleaner - if you need to use a collection in C# there is really very little reason to go with the non-generic version, but those legacy classes remain in place, cluttering up the landscape.
Another notable difference is the Enum classes in Java and C#. Java's Enum has this somewhat tortuous looking definition:
// java.lang.Enum Definition in Java
public abstract class Enum<E extends Enum<E>> implements Comparable<E>, Serializable {
(See Angelika Langer's very clear explanation of exactly why this is so.) Essentially, this means Java can give type-safe access from a string to its Enum value:
// Parsing String to Enum in Java
Colour colour = Colour.valueOf("RED");
Compare this to C#'s version:
// Parsing String to Enum in C#
Colour colour = (Colour)Enum.Parse(typeof(Colour), "RED");
As Enum already existed in C# before generics were introduced to the language, the definition could not change without breaking existing code. So, like the collections, it remains in the core libraries in this legacy state.
11 months late, but I think this question is ready for some Java Wildcard stuff.
This is a syntactical feature of Java. Suppose you have a method:
public <T> void Foo(Collection<T> thing)
And suppose you don't need to refer to the type T in the method body. You're declaring a name T and then only using it once, so why should you have to think of a name for it? Instead, you can write:
public void Foo(Collection<?> thing)
The question mark asks the compiler to pretend that you declared a normal named type parameter that only needs to appear once in that spot.
There's nothing you can do with wildcards that you can't also do with a named type parameter (which is how these things are always done in C++ and C#).
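For comparison, a rough sketch of the equivalent with a plain named type parameter in C# (names are made up; C# simply has no wildcard syntax):
using System.Collections.Generic;

class WildcardDemo
{
    // The type parameter T is declared and then used exactly once, in the parameter list.
    static void Foo<T>(ICollection<T> thing)
    {
        System.Console.WriteLine(thing.Count);
    }

    static void Main()
    {
        Foo(new List<string> { "a", "b" });    // T is inferred as string; prints 2
    }
}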
Wikipedia has great write-ups comparing both Java/C# generics and Java generics/C++ templates. The main article on Generics seems a bit cluttered but it does have some good info in it.
The biggest complaint is type erasure - that is, generics are not enforced at runtime. Here's a link to some Sun docs on the subject:
Generics are implemented by type erasure: generic type information is present only at compile time, after which it is erased by the compiler.
C++ templates are actually much more powerful than their C# and Java counterparts as they are evaluated at compile time and support specialization. This allows for Template Meta-Programming and makes the C++ compiler equivalent to a Turing machine (i.e. during the compilation process you can compute anything that is computable with a Turing machine).
In Java, generics are compiler level only, so you get:
a = new ArrayList<String>()
a.getClass() => ArrayList
Note that the type of 'a' is an array list, not a list of strings. So the type of a list of bananas would equal() a list of monkeys.
So to speak.
Looks like, among other very interesting proposals, there is one about reifying generics and breaking backwards compatibility:
Currently, generics are implemented using erasure, which means that the generic type information is not available at runtime, which makes some kind of code hard to write. Generics were implemented this way to support backwards compatibility with older non-generic code. Reified generics would make the generic type information available at runtime, which would break legacy non-generic code. However, Neal Gafter has proposed making types reifiable only if specified, so as to not break backward compatibility.
at Alex Miller's article about Java 7 Proposals
NB: I don't have enough points to comment, so feel free to move this as a comment to the appropriate answer.
Contrary to popular belief (and I never understood where it came from), .NET implemented true generics without breaking backward compatibility, and they spent explicit effort on that.
You don't have to change your non-generic .NET 1.0 code into generics just for it to be used in .NET 2.0. Both the generic and non-generic lists are still available in the .NET Framework from 2.0 all the way to 4.0, for nothing else but backward compatibility reasons. Therefore old code that still uses the non-generic ArrayList will still work, and uses the same ArrayList class as before.
Backward code compatibility has always been maintained, from 1.0 until now... So even in .NET 4.0, you still have the option to use any non-generic class from the 1.0 BCL if you choose to do so.
So I don't think Java had to break backward compatibility to support true generics.
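To illustrate the point, both of these still compile and run side by side on .NET 2.0 and later (a trivial sketch):
using System.Collections;             // the non-generic collections from .NET 1.0
using System.Collections.Generic;     // the generic collections added in .NET 2.0

class Coexistence
{
    static void Main()
    {
        ArrayList legacy = new ArrayList();      // still present and unchanged
        legacy.Add("hello");

        List<string> modern = new List<string>();
        modern.Add("world");

        System.Console.WriteLine(legacy.Count + modern.Count);    // prints 2
    }
}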