Why must parameterless struct constructors be public in C# 6? - c#

Parameterless struct constructors, which so far have been prohibited in C#, are now implemented in the Visual Studio 14 CTP (CTP 4 at the time of writing) as an experimental feature.
However, such parameterless constructors must be public. You cannot make them internal or use any other access modifier.
In the C# Design Notes for Aug 27, 2014, I found an explanation for this:
C#, VB and F# will all call an accessible parameterless constructor if they find one. If there is one, but it is not accessible, C# and VB will backfill default(T) instead. (F# will complain.)
It is problematic to have successful but different behavior of new S() depending on where you are in the code. To minimize this issue, we should make it so that explicit parameterless constructors have to be public. That way, if you want to replace the “default behavior” you do it everywhere.
What is meant by "depending on where you are in the code", and how does enforcing public parameterless constructors solve the issue?

Where you are in the code means that you will get different behaviors depending on the location of the code that uses a struct.
For instance, if you have a parameterless constructor that's marked as internal, the behavior varies if you create an instance from the same assembly in which the struct is defined or from another assembly.
In the first case the parameterless constructor is called because it's accessible, and in the second case default(T) is used. The same situation is encountered for private and protected constructors.
For example, suppose we have two assemblies, A and B:
// Assembly A
public struct SomeStruct
{
    public int x = 0;

    internal SomeStruct()
    {
        x = 10;
    }
}

public static class HelperA
{
    public static void DoSomething()
    {
        var someStruct = new SomeStruct();
        Console.WriteLine(someStruct.x); // prints 10
    }
}

// Assembly B
public static class HelperB
{
    public static void DoAnotherThing()
    {
        var someStruct = new SomeStruct();
        Console.WriteLine(someStruct.x); // prints 0
    }
}
This inconsistent behavior, which depends on where you are in the code, is the reason the designers forced explicit parameterless constructors to be public.

You can be "in a place in the code" in the following senses:
in the context of your struct, i.e.:
struct X
{
    //...
    public X CreateNew() { return new X(); }
}
in the context of your assembly:
new X(); // from the same assembly
in another assembly:
new X(); // from another assembly
So the mechanism you speak of would have to behave differently for exactly the same piece of code (new X();) in each of these contexts. In the first example, you can use the constructor regardless of its access modifier. In the second example, you can access it if it's public or internal. In the third, it could be used only if it's public.
Making it public simplifies the situation, as it will be accessible in all of these contexts.
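The three contexts can be sketched with a class analogue (classes do not get the default(T) backfill, so an inaccessible constructor is simply a compile error; the names X, V, and Demo are invented for illustration):

```csharp
using System;

public class X
{
    public int V;

    internal X() { V = 10; }

    // Context 1: inside the type itself, the constructor is
    // accessible regardless of its access modifier.
    public static X CreateNew() { return new X(); }
}

public static class Demo
{
    public static void Main()
    {
        // Context 2: same assembly, so an internal constructor is accessible.
        Console.WriteLine(new X().V);       // prints 10
        Console.WriteLine(X.CreateNew().V); // prints 10

        // Context 3: from another assembly, new X() would not compile for
        // this class; for a struct under the CTP rules, the compiler would
        // silently emit default(X) instead, leaving V == 0.
    }
}
```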

Related

Why won't the DLR (dynamic) bind to a private type? [duplicate]

I just ran into the strangest thing and I'm a bit mind = blown at the moment...
The following program compiles fine but when you run it you get a RuntimeBinderException when you try to read Value. 'object' does not contain a definition for 'Value'
class Program
{
    interface IContainer
    {
        int Value { get; }
    }

    class Factory
    {
        class Empty : IContainer
        {
            public int Value
            {
                get { return 0; }
            }
        }

        static IContainer nullObj = new Empty();

        public IContainer GetContainer()
        {
            return nullObj;
        }
    }

    static void Main(string[] args)
    {
        dynamic factory = new Factory();
        dynamic container = factory.GetContainer();
        var num0 = container.Value; // WTF!? RuntimeBinderException, really?
    }
}
Here's the mind blowing part. Move the nested type Factory+Empty outside of the Factory class, like so:
class Empty : IContainer
{
    public int Value
    {
        get { return 0; }
    }
}

class Factory...
And the program runs just fine, anyone care to explain why that is?
EDIT
In my adventure of coding I of course did something I should have thought about first. That's why you see me rambling a bit about the difference between class private and internal. This was because I had set the InternalsVisibleToAttribute, which made my test project (which was consuming the bits in this instance) behave the way it did, which was all by design, although it eluded me from the start.
Read Eric Lippert's answer for a good explanation of the rest.
What really caught me off guard was that the dynamic binder takes the visibility of the instance's type into account. I have a lot of JavaScript experience, and as a JavaScript programmer, where there really isn't such a thing as public or private, I was completely fooled by the fact that visibility mattered. After all, I was accessing this member as if it were of the public interface type (I thought dynamic was simply syntactic sugar for reflection), but the dynamic binder cannot make such an assumption unless you give it a hint, using a simple cast.
The fundamental principle of "dynamic" in C# is: at runtime do the type analysis of the expression as though the runtime type had been the compile time type. So let's see what would happen if we actually did that:
dynamic num0 = ((Program.Factory.Empty)container).Value;
That program would fail because Empty is not accessible. dynamic will not allow you to do an analysis that would have been illegal in the first place.
However, the runtime analyzer realizes this and decides to cheat a little. It asks itself "is there a base class of Empty that is accessible?" and the answer is obviously yes. So it decides to fall back to the base class and analyzes:
dynamic num0 = ((System.Object)container).Value;
Which fails because that program would give you an "object doesn't have a member called Value" error. Which is the error you are getting.
The dynamic analysis never says "oh, you must have meant"
dynamic num0 = ((Program.IContainer)container).Value;
because of course if that's what you had meant, that's what you would have written in the first place. Again, the purpose of dynamic is to answer the question what would have happened had the compiler known the runtime type, and casting to an interface doesn't give you the runtime type.
When you move Empty outside then the dynamic runtime analyzer pretends that you wrote:
dynamic num0 = ((Empty)container).Value;
And now Empty is accessible and the cast is legal, so you get the expected result.
UPDATE:
You can compile that code into an assembly, reference this assembly, and it will work if the Empty type is outside of the class, which would make it internal by default
I am unable to reproduce the described behaviour. Let's try a little example:
public class Factory
{
    public static Thing Create()
    {
        return new InternalThing();
    }
}

public abstract class Thing {}

internal class InternalThing : Thing
{
    public int Value { get; set; }
}
> csc /t:library bar.cs
class P
{
    static void Main()
    {
        System.Console.WriteLine(((dynamic)(Factory.Create())).Value);
    }
}
> csc foo.cs /r:bar.dll
> foo
Unhandled Exception: Microsoft.CSharp.RuntimeBinder.RuntimeBinderException:
'Thing' does not contain a definition for 'Value'
And you see how this works: the runtime binder has detected that InternalThing is internal to the foreign assembly, and therefore is inaccessible in foo.exe. So it falls back to the public base type, Thing, which is accessible but does not have the necessary property.
I'm unable to reproduce the behaviour you describe, and if you can reproduce it then you've found a bug. If you have a small repro of the bug I am happy to pass it along to my former colleagues.
I guess that, at runtime, container method calls are just resolved against the private Empty class, which makes your code fail. As far as I know, dynamic cannot be used to access private members (or public members of a private class).
This should (of course) work :
var num0 = ((IContainer)container).Value;
Here, it is the class Empty which is private: you cannot manipulate Empty instances outside of the declaring class (Factory). That's why your code fails.
If Empty were internal, you would be able to manipulate its instances across the whole assembly (well, not really, because Factory is private), making all the dynamic calls allowed and your code work.

Why class needs access modifiers?

As I understand it, with the private modifier you can only inherit from a class, but you cannot create instances of it.
private class A // error
{
}

class B
{
    static int Main()
    {
        A obj = new A();
        return 0;
    }
}
Is that the only useful feature of private classes?
We need access modifiers because the different types in our assembly serve different purposes. For instance, an assembly will contain at least one public class, which will be consumed by the users of the assembly. On the other hand, you may need nested classes that are used only inside their declaring type; for this reason, you declare them as private. Last but not least, there are types that should be accessible from all the other types in your assembly but not from outside it; hence, you declare them as internal. In a few words, each type has a different purpose of existence and, subsequently, a different usage. For that reason they should also have different access modifiers.
NOTE: By default, the access modifier for a top-level class is internal; for a nested class it is private.
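As a minimal sketch of those defaults (the type names Helper and Hidden are invented for illustration):

```csharp
using System;

// No modifier on a top-level type: defaults to internal,
// i.e. visible everywhere in this assembly, invisible outside it.
class Helper
{
    // No modifier on a nested type: defaults to private,
    // i.e. usable only inside Helper.
    class Hidden
    {
        public static string Name = "hidden";
    }

    public static string Describe() { return Hidden.Name; } // fine: same declaring type
}

public static class Program
{
    public static void Main()
    {
        Console.WriteLine(Helper.Describe()); // prints "hidden"
        // Helper.Hidden h;  // would not compile: Hidden is private to Helper
    }
}
```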


Enum on namespace level - still needs to be public?

I am not sure why the enum there must be public in order to be used with the delegate. I assumed when on namespace level, the whole app can access it, as it is in the scope.
namespace Test
{
    enum Days
    {
        Monday, Tuesday
    }

    class TestingClass
    {
        public delegate void DelTest(Days d) // ERROR, type enum is less accessible
    }
}
Your delegate type is actually declared within an internal class, so it's effectively internal too (in some senses, anyway). That's why your example as shown will compile (after adding the semi-colon). To make it break, you'd have to make TestingClass public too. So options:
Leave it as shown
Make the delegate explicitly internal, if you want TestingClass to be public
Make the enum explicitly public, if you want everything to be public
Just to explain why your current code would be broken if TestingClass were public: the delegate would be public, and therefore visible outside the current assembly. That means all its parameters and the return type have to be visible too.
Don't forget that the default access level for a member in C# is always "the most restrictive access level that could be explicitly specified for that member" - so for a top-level type (including an enum), the default accessibility is internal.
The accessibility of your enum must match the delegate. Think about how you're going to call it.
new TestingClass.DelTest(SomeMethod).Invoke(Days.Monday);
To be able to do this from a different assembly, the Days enum must be public. If you don't want it to be public, change the accessibility of the delegate to match that of the enum, e.g. set both to be internal.
I assumed when on namespace level, the whole app can access it
No, the whole assembly can access it. The default access level is internal.
Edit: When I change your code to use a public class:
enum Days { ... }
public class TestingClass { void M(Days d) {} }
I do get a compile error
Inconsistent accessibility: parameter type 'Test.Days' is less accessible than ...
And that is what #firefox explains: a parameter type in a public method must also be public, to avoid inconsistencies. Currently your Days type is less accessible (internal).
This piece of code compiles fine for me too, with the addition of the semicolon.
The error of "parameter type is less accessible than the delegate" would only occur if the class accessibility is raised, as currently they are defined with the same accessibility level, internal.
e.g.
namespace Test
{
    enum Days
    {
        Monday, Tuesday
    }

    public class TestingClass
    {
        public delegate void DelTest(Days d); // This will produce an error...
    }
}
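One way to fix the failing example above is to make the accessibility levels match; this sketch raises the enum to public (making the delegate internal instead would work equally well):

```csharp
using System;

namespace Test
{
    // The enum is now public, so the public delegate may expose it.
    public enum Days
    {
        Monday, Tuesday
    }

    public class TestingClass
    {
        public delegate void DelTest(Days d); // compiles: Days is as accessible as DelTest
    }

    static class Demo
    {
        static void Main()
        {
            TestingClass.DelTest del = d => Console.WriteLine(d);
            del(Days.Monday); // prints Monday
        }
    }
}
```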

Co-/contravariance support for non-generic types? [duplicate]

This question already has answers here:
Why does C#/CLR not support method override co/contra-variance?
(5 answers)
Closed 8 years ago.
I wonder why the C# team decided not to support co-/contravariance for non-generics, considering they could be just as safe. The question is rather subjective, as I don't expect a team member to respond, but someone may have the insight I (and Barbara Liskov) lack.
Let's look at this sample interface:
public interface ITest
{
    object Property
    {
        get;
    }
}
The following implementation will fail, although completely safe (we could always return a more specific type without violating the interface - not in C#, but at least in theory).
public class Test : ITest
{
    public string Property
    {
        get;
    }
}
The code would naturally not be safe, if the interface included a setter, but this is no reason for limiting implementation overall, as this could be pointed out by using out/in to declare safety, just as for generics.
The CLR doesn't support covariant return types, whereas it has supported delegate/interface generic variance from .NET 2.0 onwards.
In other words, it's not really up to the C# team, but the CLR team.
As to why the CLR doesn't support normal variance - I'm not sure, other than adding complexity, presumably without the requisite amount of perceived benefit.
EDIT: To counter the point about return type covariance, from section 8.10.4 of the CLI spec, talking about vtable "slots":
For each new member that is marked
"expect existing slot", look to see if
an exact match on kind (i.e., field or
method), name, and signature exists
and use that slot if it is found,
otherwise allocate a new slot.
From partition II, section 9.9:
Specifically, in order to determine
whether a member hides (for static or
instance members) or overrides (for
virtual methods) a member from a base
class or interface, simply substitute
each generic parameter with its
generic argument, and compare the
resulting member signatures.
There is no indication that the comparison is done in a way which allows for variance.
If you believe the CLR does allow for variance, I think that given the evidence above it's up to you to prove it with some appropriate IL.
EDIT: I've just tried it in IL, and it doesn't work. Compile this code:
using System;

public class Base
{
    public virtual object Foo()
    {
        Console.WriteLine("Base.Foo");
        return null;
    }
}

public class Derived : Base
{
    public override object Foo()
    {
        Console.WriteLine("Derived.Foo");
        return null;
    }
}

class Test
{
    static void Main()
    {
        Base b = new Derived();
        b.Foo();
    }
}
Run it, with output:
Derived.Foo
Disassemble it:
ildasm Test.exe /out:Test.il
Edit Derived.Foo to have a return type of "string" instead of "object":
.method public hidebysig virtual instance string Foo() cil managed
Rebuild:
ilasm /OUTPUT:Test.exe Test.il
Rerun it, with output:
Base.Foo
In other words, Derived.Foo no longer overrides Base.Foo as far as the CLR is concerned.
The return value of a method is always "out"; it is always on the right-hand side of an assignment expression.
The CLR assembly format has metadata which maps instance methods to interface methods, but this resolution depends only on the input parameters; supporting return-type variance would require a new format, and resolution could also become ambiguous.
A new algorithm for mapping instance methods to interface signatures could be complex and CPU-intensive as well. And I believe interface method signatures are resolved at compile time, because doing so at runtime would be too expensive.
Method inference/resolution could be a problem, as explained below.
Consider the following sample, where dynamic method resolution based on return types alone were allowed (which is absolutely wrong):
public class Test
{
    public int DoMethod() { return 2; }
    public string DoMethod() { return "Name"; }
}

Test t;
int n = t.DoMethod();      // 1st method
string txt = t.DoMethod(); // 2nd method
object x = t.DoMethod();   // DOOMED ... which one??
The CLR does not support variance on method overrides, but there is a workaround for interface implementations:
public class Test : ITest
{
    public string Property
    {
        get;
    }

    object ITest.Property
    {
        get
        {
            return Property;
        }
    }
}
This achieves the same effect as a covariant override, but it can only be used for interfaces and for direct implementations.
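A quick usage sketch of that workaround (the property bodies and the "hello" value are invented for illustration):

```csharp
using System;

public interface ITest
{
    object Property { get; }
}

public class Test : ITest
{
    // The class exposes the more specific type...
    public string Property { get { return "hello"; } }

    // ...while the explicit interface implementation satisfies ITest exactly.
    object ITest.Property { get { return Property; } }
}

public static class Demo
{
    public static void Main()
    {
        var t = new Test();
        string s = t.Property;          // string, no cast needed
        object o = ((ITest)t).Property; // object, through the interface
        Console.WriteLine(s);           // prints hello
        Console.WriteLine(o);           // prints hello
    }
}
```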

Categories

Resources