In Wicket, there is a class called MetaDataKey, used to store typed meta information in Wicket components. Since Wicket makes heavy use of serialization, the Wicket designers decided that simple object identity would not be reliable, so they made MetaDataKey an abstract class, forcing you to create a subclass for each key; equality is then an instance-of check against that subclass (from the Wicket source code):
public boolean equals(Object obj) {
return obj != null && getClass().isInstance(obj);
}
Thus, to create a key and store something I would do something like this:
private final static MetaDataKey<String> foo = new MetaDataKey<String>() {};
...
component.setMetaData(foo, "bar");
First, why would making a subtype work better than using object identity under serialization?
Second, if I wanted to create a similar facility in C# (which lacks anonymous inner classes), how would I go about it?
The problem with serialization and identity is that the serialization process actually creates a clone of the original object. Most of the time,
SomeThing x = ...;
ObjectOutputStream oos = ...;
oos.writeObject(x);
followed by
ObjectInputStream ois = ...;
SomeThing y = (SomeThing)ois.readObject();
cannot and does not guarantee that x == y (though it should be the case that x.equals(y)).
If you really wanted to go with identity, you'd have to write custom serialization code which enforces that reading an instance of your class from the stream yields the same (as in singleton) instance that was written. This is hard to get right, and I think forcing developers to do that simply to declare a magic key would make the API quite hard to use.
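For reference, Java's serialization hook for exactly this is readResolve(). Here is a minimal sketch (the class name and registry are my own invention, not Wicket's) of a key type whose deserialized form resolves back to the original instance:

```java
import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical key type: instances are interned in a registry, and
// deserialization resolves back to the interned singleton.
final class IdentityKey implements Serializable {
    private static final Map<String, IdentityKey> REGISTRY = new ConcurrentHashMap<>();

    private final String name;

    private IdentityKey(String name) { this.name = name; }

    public static IdentityKey of(String name) {
        return REGISTRY.computeIfAbsent(name, IdentityKey::new);
    }

    // Invoked by the serialization machinery after the object is read;
    // returning the interned instance preserves == identity.
    private Object readResolve() {
        return of(name);
    }
}
```

After a round-trip through ObjectOutputStream/ObjectInputStream, the deserialized object is the very instance that was registered, so == comparisons keep working, at the cost of exactly the kind of custom serialization code described above.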
Nowadays, one could use enums and rely on the VM to enforce the singleton character.
enum MyMetaDataKey implements HypotheticalMetaDataKeyInterface {
TITLE(String.class),
WIDTH(Integer.class);
private final Class<?> type;
private MyMetaDataKey(Class<?> t) { type = t; }
public Class<?> getType() { return type; }
}
The disadvantage is that you cannot declare your enum to inherit from a common base class (you can have it implement interfaces, though), so you would have to write, over and over again, the support code that MetaDataKey might otherwise provide for you. In the example above, getType should have been provided by an abstract base class, but it couldn't be, because we used an enum.
As to the second part of the question... Unfortunately, I don't feel proficient enough in C# to answer that (besides the plain-private-class solution already given in the comments).
That said... (edit to answer the questions which appeared in the comments on Ben's answer) one possible solution to achieve something similar (in the sense that it would be just as usable as the Java solution) is this:
[Serializable]
public class MetaDataKey<T> {
private Guid uniqueId;
private Type type;
public MetaDataKey(Guid key, Type type) {
this.uniqueId = key;
this.type = type;
}
public override bool Equals(object other) {
return other is MetaDataKey<T> && uniqueId == ((MetaDataKey<T>)other).uniqueId;
}
public override int GetHashCode() {
return uniqueId.GetHashCode(); // Equals and GetHashCode must agree
}
}
which may be used as in
class MyStuff {
private static MetaDataKey<String> key = new MetaDataKey<String>(Guid.NewGuid(), typeof(String));
}
Please ignore any violations of the C#-language. It's too long since I used it.
This may look like a valid solution. The problem, however, lies in the initialization of the key constant. If it is done as in the example above, then each time the application is started, a new Guid value is created and used as the identifier for MyStuff's meta-data value. So if, say, you have serialized data from a previous invocation of the program (say, stored in a file), it will have keys with a different Guid value for MyStuff's meta-data key. And effectively, after deserialization, any request
String myData = magicDeserializedMetaDataMap.Get(MyStuff.key);
will fail -- simply because the Guids differ. So, in order to make the example work, you have to have persistent pre-defined Guids:
class MyStuff {
private static Guid keyId = new Guid("{pre-define-xyz}");
private static MetaDataKey<String> key = new MetaDataKey<String>(keyId, typeof(String));
}
Now things work as desired, but the burden of maintaining the key Guids has fallen upon you. This is, I think, what the Java solution tries to avoid with its cute anonymous-subclass trick.
As far as the rationale for subtyping vs using reference identity, I'm as much in the dark as you. If I had to guess, I'd say that as references are basically fancy pointers, their value isn't necessarily guaranteed to be retained through serialization/deserialization, and a clever way to simulate a unique object ID is with whatever the Java compiler chooses to call a particular anonymous subclass. I'm not too conversant with the Java spec, so I don't know if this is specified behavior or an implementation detail of the various compilers. I may be entirely off track here.
C# has anonymous types, but they are severely limited - in particular, they cannot inherit from anything other than System.Object and can implement no interfaces. Given that, I don't see how this particular technique can be ported from Java.
One way to go about retaining object uniqueness through serialization could go thusly: The Wicket-style base class could, in theory, retain a private member that is runtime-unique and generated on construction, like a System.Guid or a System.IntPtr that gets the value of the object's handle. This value could be (de)serialized and used as a stand-in for reference equality.
[Serializable]
public class MetaDataKey<T>
{
private Guid id;
public MetaDataKey(...)
{
this.id = Guid.NewGuid();
....
}
public override bool Equals(object obj)
{
var that = obj as MetaDataKey<T>;
return that != null && this.id == that.id;
}
}
EDIT
Here's how to do it by saving the object's actual reference value; this takes a bit less memory, and is slightly more true to the notion of reference-equality.
using System.Runtime.InteropServices;
[Serializable]
public class AltDataKey<T>
{
private long id; // IntPtr is either 32- or 64-bit, so to be safe store as a long.
public AltDataKey(...)
{
var handle = GCHandle.Alloc(this);
var ptr = GCHandle.ToIntPtr(handle);
id = (long)ptr;
handle.Free();
}
// as above
}
Related
For the purposes of this question, a 'constant reference' is a reference to an object from which you cannot call methods that modify the object or modify its properties.
I want something like this:
Const<User> user = provider.GetUser(); // Gets a constant reference to an "User" object
var name = user.GetName(); // Ok. Doesn't modify the object
user.SetName("New value"); // <- Error. Shouldn't be able to modify the object
Ideally, I would mark with a custom attribute (e.g. [Constant]) every method of a class that doesn't modify the instance, and only those methods can be called from the constant reference. Calls to other methods would result in an error, if possible, during compile time.
The idea is that I can return a read-only reference to an object and be sure that it will not be modified by the client.
The technique you're referring to is called "const-correctness", which is a language feature of C++ and Swift but unfortunately not C#. However, you're onto something by using a custom attribute, because that way you can enforce it via a Roslyn extension - but that's a rabbit-hole.
Alternatively, there's a much simpler solution using interfaces: because C# (and, I think, the CLR too) does not support const-correctness (the closest we have is the readonly field modifier), the .NET base-class-library designers added "read-only interfaces" to common mutable types to allow an object (whether mutable or immutable) to expose its functionality via an interface that only exposes immutable operations. Some examples include IReadOnlyList<T>, IReadOnlyCollection<T>, and IReadOnlyDictionary<TKey, TValue> - while these are all enumerable types, the technique is good for singular objects too.
This design has the advantage of working in any language that supports interfaces but not const-correctness.
For each type (class, struct, etc.) in your project that needs to expose data without risk of it being changed, create an immutable interface.
Modify your consuming code to use these interfaces instead of the concrete type.
Like so:
Supposing we have a mutable class User and a consuming service:
public class User
{
public String UserName { get; set; }
public Byte[] PasswordHash { get; set; }
public Byte[] PasswordSalt { get; set; }
public Boolean ValidatePassword(String inputPassword)
{
Byte[] inputHash = Crypto.GetHash( inputPassword, this.PasswordSalt );
return Crypto.CompareHashes( this.PasswordHash, inputHash );
}
public void ResetSalt()
{
this.PasswordSalt = Crypto.GetRandomBytes( 16 );
}
}
public static void DoReadOnlyStuffWithUser( User user )
{
...
}
public static void WriteStuffToUser( User user )
{
...
}
Then make an immutable interface:
public interface IReadOnlyUser
{
// Note that the interface's properties lack setters.
String UserName { get; }
IReadOnlyList<Byte> PasswordHash { get; }
IReadOnlyList<Byte> PasswordSalt { get; }
// ValidatePassword does not mutate state so it's exposed
Boolean ValidatePassword(String inputPassword);
// But ResetSalt is not exposed because it mutates instance state
}
Then modify your User class and consumers:
public class User : IReadOnlyUser
{
// (same as before, except we need to expose IReadOnlyList<Byte> versions of the array properties:)
IReadOnlyList<Byte> IReadOnlyUser.PasswordHash => this.PasswordHash;
IReadOnlyList<Byte> IReadOnlyUser.PasswordSalt => this.PasswordSalt;
}
public static void DoReadOnlyStuffWithUser( IReadOnlyUser user )
{
...
}
// This method still uses `User` instead of `IReadOnlyUser` because it mutates the instance.
public static void WriteStuffToUser( User user )
{
...
}
So, these are the first two ideas I initially had, but they don't quite solve the problem.
Using Dynamic Objects:
The first idea I had was creating a Dynamic Object that would intercept all member invocations and throw an error if the method being called isn't marked with a [Constant] custom attribute. This approach is problematic because a) we don't have the support of the compiler to check for errors in the code (i.e. method name typos) when dealing with dynamic objects, which might lead to a lot of runtime errors; and b) I intend to use this a lot, and looking up methods by name every time one is called might have a considerable performance impact.
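For what it's worth, the interception idea itself is easy to demonstrate outside C#; here is a rough Java analogue (every name in it is invented for illustration) using a dynamic proxy that rejects calls to methods not marked with a @Constant annotation:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

@Retention(RetentionPolicy.RUNTIME)
@interface Constant {}

interface Account {
    @Constant String getName();   // read-only: allowed through the proxy
    void setName(String name);    // mutator: not annotated, so rejected
}

final class ConstRef {
    // Wraps target so that only @Constant-annotated methods may be invoked.
    @SuppressWarnings("unchecked")
    static <T> T wrap(T target, Class<T> iface) {
        InvocationHandler handler = (proxy, method, args) -> {
            if (!method.isAnnotationPresent(Constant.class))
                throw new UnsupportedOperationException(method.getName() + " may mutate the object");
            return method.invoke(target, args);
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }
}
```

It shares both drawbacks mentioned above: mistakes surface only at runtime, and every call pays a reflection cost.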
Using RealProxy:
My second idea was using a RealProxy to wrap the real object and validate the methods being called, but this only works with objects that inherit from MarshalByRefObject.
I have a case where I have several sets of numbers (register values). I want to improve readability and also to check appropriate types (only certain values make sense in certain functions).
In my particular implementation, I made them enums - so I have a set of enums now.
Now I seem to have reached the end on this approach, since I want to divide them into sets of valid enums for certain applications - so function A could for example take (a value from) enumA, enumB and enumC as input, but not enumD which is a description of different functionality.
I already looked into enums in interfaces and enum inheritance - both are dead ends, not possible in C#.
I wonder now how the solution to this problem might look like. I would like to get intellisense on the possible values and also have some type safety, so that I could not (well, at least not without maliciously casting it) feed the wrong values in.
How to achieve this?
(Possible solutions would be to simply write several functions taking several different enums - still possible but not really nice - or something like Is there a name for this pattern? (C# compile-time type-safety with "params" args of different types); both just seem not too nice.)
One option is to scrap enums and use your own classes designed to mimic enums. It will be a bit more work for you to set them up, but once you do it will be easy enough to use, and will be able to have the functionality you've described.
public class Register
{
private int value;
internal Register(int value)
{
this.value = value;
}
public static readonly Register NonSpecialRegister = new Register(0);
public static readonly Register OtherNonSpecialRegister = new Register(1);
public static readonly SpecialRegister SpecialRegister
= SpecialRegister.SpecialRegister;
public static readonly SpecialRegister OtherSpecialRegister
= SpecialRegister.OtherSpecialRegister;
public override int GetHashCode()
{
return value.GetHashCode();
}
public override bool Equals(object obj)
{
Register other = obj as Register;
if (other == null)
return false;
return other.value == value;
}
}
public class SpecialRegister : Register
{
internal SpecialRegister(int value) : base(value) { }
public static readonly SpecialRegister SpecialRegister = new SpecialRegister(2);
public static readonly SpecialRegister OtherSpecialRegister = new SpecialRegister(3);
}
Given this, you could have a method like:
public static void Foo(Register reg)
{
}
That could take any register, and could be called like:
Foo(Register.NonSpecialRegister);
Foo(Register.OtherSpecialRegister);
Then you could have another method such as:
public static void Bar(SpecialRegister reg)
{
}
Which wouldn't be able to accept a Register.NonSpecialRegister, but could accept a Register.OtherSpecialRegister or SpecialRegister.SpecialRegister.
Sounds like you have exhausted the capabilities of the static type system on the CLR. You can still get runtime validation by wrapping each integer with a class that validates that the value you try to store in it actually is a member of the static set.
If you have a reliable test suite or are willing to do manual testing this will at least catch the bugs instead of the bugs causing silent data corruption.
If you have multiple "sets" that you want to keep apart you can either use class inheritance or have a set of user-defined conversion operators which validate that the conversion is OK at runtime.
I don't know what specific requirements you have but maybe you can use class-based inheritance to check some properties statically. The base class would be the larger set in that case and derived classes would specialize the set of allowed values.
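A minimal sketch of that runtime check (all names are invented; shown in Java, but the idea maps directly onto the CLR): the wrapper validates set membership on construction, and a subclass narrows the allowed set so method signatures still document intent:

```java
import java.util.Set;

// Wraps a raw register value and rejects anything outside the known set.
class RegisterValue {
    private static final Set<Integer> ALL = Set.of(0, 1, 2, 3);

    protected final int value;

    RegisterValue(int value) {
        this(value, ALL);
    }

    protected RegisterValue(int value, Set<Integer> allowed) {
        if (!allowed.contains(value))
            throw new IllegalArgumentException("invalid register value: " + value);
        this.value = value;
    }

    int raw() { return value; }
}

// The derived class is the smaller set: a method taking SpecialValue
// statically excludes plain RegisterValues, and the constructor catches
// out-of-set raw values at runtime.
class SpecialValue extends RegisterValue {
    private static final Set<Integer> SPECIAL = Set.of(2, 3);

    SpecialValue(int value) {
        super(value, SPECIAL);
    }
}
```

A method that accepts SpecialValue then refuses plain RegisterValues at compile time, while the constructor catches invalid raw values at runtime.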
You have basically two options:
Option 1: Multiple enums
Create multiple enums, one for each application, and replicate the values in each enum. Then you can cast between them. For example:
enum App1
{
Data1 = AppAll.Data1,
Data2 = AppAll.Data2,
Data42 = AppAll.Data42,
}
enum App2
{
Data2 = AppAll.Data2,
Data16 = AppAll.Data16,
Data42 = AppAll.Data42,
}
enum AppAll
{
Data1 = 1,
Data2 = 2,
Data16 = 16,
Data42 = 42,
}
App1 value1 = (App1)AppAll.Data2;
App2 value2 = (App2)value1;
This will give you IntelliSense.
Option 2: Determine which are allowed
Create a method that returns a boolean on which values are allowed (this may be virtual and overridden for each application). Then you can throw an exception when the enum value is wrong.
public bool IsAllowed(AppAll value)
{
return value == AppAll.Data2
|| value == AppAll.Data16
|| value == AppAll.Data42;
}
if (!IsAllowed(value))
throw new ArgumentException("Enum value not allowed.");
This will not give you IntelliSense.
A few notes:
You cannot have inheritance for enums because under the covers enums are represented as structs (i.e. value types).
In C# you can literally cast any value to your enum type, even when it is not a member of it. For example, I can do (App1)1337 even when there is no member with value 1337.
If you want compile type checking, you are better off with distinct enums for distinct cases. If you want to have a master enum with all of your possibilities you can write a test that ensures that all of your "child" enum lists are valid subsets of the master (in terms of Int casts).
As an alternative, I would have to wonder (since no code is provided, I can only wonder) if you might not be better served with objects with methods for each enum option. Then you inherit out objects with the various methods instead of enums. (After all, it seems that you are using Enums as proxies for method signatures).
OK, so let's say I have a structure A like that:
struct A {
private String _SomeText;
private int _SomeValue;
public A(String someText, int SomeValue) { /*.. set the initial values..*/ }
public String SomeText{ get { return _SomeText; } }
public int SomeValue{ get { return _SomeValue; } }
}
Now what I want to be able to do is to return that structure A as the result of a method in a class ABC, like that:
class ABC {
public A getStructA(){
//creation of Struct A
return a;
}
}
I don't want any programmer using my library (which will have Struct A and Class ABC and some more stuff) to ever be able to create an instance of Struct A.
I want the only way for it to be created is as a return from the getStructA() method. Then the values can be accessed through the appropriate getters.
So is there any way to set a restrictions like that? So a Structure can't be instantiated outside of a certain class? Using C#, .Net4.0.
Thanks for your help.
---EDIT:----
To add some details on why am I trying to achieve this:
My class ABC has some "status" a person can query. This status has 2 string values and then a long list of integers.
There never will be a need to create an object/instance of "Status" by the programmer, the status can only be returned by "getStatus()" function of the class.
I do not want to split these 3 fields into different methods, as to obtain them I am calling a Windows API (P/Invoke) which returns a similar struct with all 3 fields.
If I was indeed going to split it to 3 methods and not use the struct, I would have to either cache results or call the method from Windows API every time one of these 3 methods is called...
So I can either make a public struct and programmers can instantiate it if they want, which will be useless for them as there will be no methods which can accept it as a parameter. Or I can construct the library in such a way that this struct (or change it to a class if it makes things easier) can be obtained only as a return from the method.
If the "restricted" type is a struct, then no, there is no way to do that. The struct must be at least as public as the factory method, and if the struct is public then it can be constructed with its default constructor. However, you can do this:
public struct A
{
private string s;
private int i;
internal bool valid;
internal A(string s, int i)
{
this.s = s;
this.i = i;
this.valid = true;
}
...
and now you can have your library code check the "valid" flag. Instances of A can only be made either (1) by a method internal to your library that can call the internal constructor, or (2) by the default constructor. You can tell them apart with the valid flag.
A number of people have suggested using an interface, but that's a bit pointless; the whole point of using a struct is to get value type semantics and then you go boxing it into an interface. You might as well make it a class in the first place. If it is going to be a class then it is certainly possible to make a factory method; just make all the ctors of the class internal.
And of course I hope it goes without saying that none of this gear should be used to implement code that is resistant to attack by a fully-trusted user. Remember, this system is in place to protect good users from bad code, not good code from bad users. There is nothing whatsoever that stops fully trusted user code from calling whatever private methods they want in your library via reflection, or for that matter, altering the bits inside a struct with unsafe code.
Create a public interface and make the struct private to the class that returns it.
public interface ISpecialReturnType
{
String SomeText{ get; }
int SomeValue{ get; }
}
class ABC{
public ISpecialReturnType getStructA(){
A a = //Get a value for a;
return a;
}
private struct A : ISpecialReturnType
{
private String _SomeText;
private int _SomeValue;
public A(String someText, int SomeValue) { /*.. set the initial values..*/ }
public String SomeText{ get { return _SomeText; } }
public int SomeValue{ get { return _SomeValue; } }
}
}
What exactly are you concerned about? A structure is fundamentally a collection of fields stuck together with duct tape. Since struct assignment copies all of the fields from one struct instance to another, outside the control of the struct type in question, structs have a very limited ability to enforce any sort of invariants, especially in multi-threaded code (unless a struct is exactly 1, 2, or 4 bytes, code that wants to create an instance which contains a mix of data copied from two different instances may do so pretty easily, and there's no way the struct can prevent it).
If you want to ensure that your methods will not accept any instances of a type other than those which your type has produced internally, you should use a class that either has only internal or private constructors. If you do that, you can be certain that you're getting the instances that you yourself produced.
EDIT
Based upon the revisions, I don't think the requested type of restriction is necessary or particularly helpful. It sounds like what's fundamentally desired is to stick a bunch of values together and store them into a stuck-together group of variables held by the caller. If you declare a struct as simply:
public struct QueryResult {
public TimeSpan ExecutionDuration;
public DateTime CompletionTime;
public String ReturnedMessage;
}
then a declaration:
QueryResult foo;
will effectively create three variables, named foo.ExecutionDuration, foo.CompletionTime, and foo.ReturnedMessage. The statement:
foo = queryPerformer.performQuery(...);
will set the values of those three variables according to the results of the function--essentially equivalent to:
{
var temp = queryPerformer.performQuery(...);
foo.ExecutionDuration = temp.ExecutionDuration;
foo.CompletionTime = temp.CompletionTime;
foo.ReturnedMessage = temp.ReturnedMessage;
}
Nothing will prevent user code from doing whatever it wants with those three variables, but so what? If user code decides for whatever reason to say foo.ReturnedMessage = "George"; then foo.ReturnedMessage will equal George. The situation is really no different from if code had said:
int functionResult = doSomething();
and then later said functionResult = 43;. The behavior of functionResult, like any other variable, is to hold the last thing written to it. If the last thing written to it is the result of the last call to doSomething(), that's what it will hold. If the last thing written was something else, it will hold something else.
Note that a struct field, unlike a class field or a struct property, can only be changed either by writing to it, or by using a struct assignment statement to write all of the fields in one struct instance with the values in corresponding fields of another. From the consumer's perspective, a read-only struct property carries no such guarantee. A struct may happen to implement a property to behave that way, but without inspecting the code of the property there's no way to know whether the value it returns might be affected by some mutable object.
Does anyone know why covariant return types are not supported in C#? Even when attempting to use an interface, the compiler complains that it is not allowed. See the following example.
class Order
{
private Guid? _id;
private String _productName;
private double _price;
protected Order(Guid? id, String productName, double price)
{
_id = id;
_productName = productName;
_price = price;
}
protected class Builder : IBuilder<Order>
{
public Guid? Id { get; set; }
public String ProductName { get; set; }
public double Price { get; set; }
public virtual Order Build()
{
if(Id == null || ProductName == null || Price == null)
throw new InvalidOperationException("Missing required data!");
return new Order(Id, ProductName, Price);
}
}
}
class PastryOrder : Order
{
PastryOrder(Guid? id, String productName, double price, PastryType pastryType) : base(id, productName, price)
{
}
class PastryBuilder : Builder
{
public PastryType PastryType {get; set;}
public override PastryOrder Build()
{
if(PastryType == null) throw new InvalidOperationException("Missing data!");
return new PastryOrder(Id, ProductName, Price, PastryType);
}
}
}
interface IBuilder<in T>
{
T Build();
}
public enum PastryType
{
Cake,
Donut,
Cookie
}
Thanks for any responses.
UPDATE: This answer was written in 2011. After two decades of people proposing return type covariance for C#, it looks like it will finally be implemented; I am rather surprised. See the bottom of https://devblogs.microsoft.com/dotnet/welcome-to-c-9-0/ for the announcement; I'm sure details will follow.
First off, return type contravariance doesn't make any sense; I think you are talking about return type covariance.
See this question for details:
Does C# support return type covariance?
You want to know why the feature is not implemented. phoog is correct; the feature is not implemented because no one here ever implemented it. A necessary but insufficient requirement is that the feature's benefits exceed its costs.
The costs are considerable. The feature is not supported natively by the runtime, it works directly against our goal to make C# versionable because it introduces yet another form of the brittle base class problem, Anders doesn't think it is an interesting or useful feature, and if you really want it, you can make it work by writing little helper methods. (Which is exactly what the CIL version of C++ does.)
The benefits are small.
High cost, small benefit features with an easy workaround get triaged away very quickly. We have far higher priorities.
The contravariant generic parameter cannot be used as output, because that cannot be guaranteed to be safe at compile time, and the C# designers made a decision not to defer the necessary checks to run-time.
This is the short answer, and here is a slightly longer one...
What is variance?
Variance is a property of a transformation applied to a type hierarchy:
If the result of the transformation is a type hierarchy that keeps the "direction" of the original type hierarchy, the transformation is co-variant.
If the result of the transformation is a type hierarchy that reverses the original "direction", the transformation is contra-variant.
If the result of the transformation is a bunch of unrelated types, the transformation is in-variant.
What is variance in C#?
In C#, the "transformation" is "being used as a generic parameter". For example, let's say a class Parent is inherited by class Child. Let's denote that fact as: Parent > Child (because all Child instances are also Parent instances, but not necessarily the other way around, hence Parent is "bigger"). Let's also say we have a generic interface I<T>:
If I<Parent> > I<Child>, the T is covariant (the original "direction" between Parent and Child is kept).
If I<Parent> < I<Child>, the T is contravariant (the original "direction" is reversed).
If I<Parent> is unrelated to I<Child>, the T is invariant.
So, what is potentially unsafe?
If C# compiler actually agreed to compile the following code...
class Parent {
}
class Child : Parent {
}
interface I<in T> {
T Get(); // Imagine this actually compiles.
}
class G<T> : I<T> where T : new() {
public T Get() {
return new T();
}
}
// ...
I<Child> g = new G<Parent>(); // OK since T is declared as contravariant, thus "reversing" the type hierarchy, as explained above.
Child child = g.Get(); // Yuck!
...this would lead to a problem at run-time: a Parent is instantiated and assigned to a reference to Child. Since Parent is not Child, this is wrong!
The last line looks OK at compile-time since I<Child>.Get is declared to return Child, yet we could not fully "trust" it at run-time. C# designers decided to do the right thing and catch the problem completely at compile-time, and avoid any need for the run-time checks (unlike for arrays).
(For similar but "reverse" reasons, covariant generic parameter cannot be used as input.)
Eric Lippert has written a few posts on this site about return method covariance on method overrides, without as far as I can see addressing why the feature is unsupported. He has mentioned, though, that there are no plans to support it: https://stackoverflow.com/a/4349584/385844
Eric is also fond of saying that the answer to "why isn't X supported" is always the same: because nobody has designed, implemented, and tested (etc.) X. An example of that is here: https://stackoverflow.com/a/1995706/385844
There may be some philosophical reason for the lack of this feature; perhaps Eric will see this question and enlighten us.
EDIT
As Pratik pointed out in a comment:
interface IBuilder<in T>
{
T Build();
}
should be
interface IBuilder<out T>
{
T Build();
}
That would allow you to implement PastryBuilder : IBuilder<PastryOrder>, and you could then have
IBuilder<Order> builder = new PastryBuilder();
There are probably two or three approaches you could use to solve your problem, but, as you note, return method covariance is not one of those approaches, and none of this information answers the question of why C# doesn't support it.
Just to post this somewhere google finds it...
I was looking into this because I wanted to have an interface in which I can return collections / enumerables of arbitrary classes implementing a specific interface.
If you're fine with defining the concrete types you want to return, you can simply define your interface accordingly. It will then check at compile time that the constraints (subtype of whatever) are met.
I provided an example, that might help you.
As Branko Dimitrijevic pointed out, usually it is unsafe to allow covariant return types in general. But using this, it's type-safe and you can even nest this (e. g. interface A<T, U> where T: B<U> where U : C)
(Disclaimer: I started using C# yesterday, so I might be completely wrong regarding best practices, someone with more experience should please comment on this :) )
Example:
Using
interface IProvider<T, Coll> where T : ProvidedData where Coll : IEnumerable<T>
{
Coll GetData();
}
class XProvider : IProvider<X, List<X>>
{
public List<X> GetData() { ... }
}
calling
new XProvider().GetData()
works and in this case is safe. You only have to define the types you want to return in this case.
More on this: http://msdn.microsoft.com/de-de/library/d5x73970.aspx
I'm using a 3rd party's set of webservices, and I've hit a small snag. Before I manually make a method copying each property from the source to the destination, I thought I'd ask here for a better solution.
I've got 2 objects, one of type Customer.CustomerParty and one of type Appointment.CustomerParty. The CustomerParty objects are actually, property for property and sub-object for sub-object, exactly the same. But I can't cast from one to the other.
So, I need to find a certain person from the webservice. I can do that by calling Customer.FindCustomer(customerID) and it returns a Customer.CustomerParty object.
I need to take that person that I found and then use them a few lines down in a "CreateAppointment" request. Appointment.CreateAppointment takes an appointment object, and the appointment object contains a CustomerParty object.
However, the CustomerParty object it wants is really Appointment.CustomerParty. I've got a Customer.CustomerParty.
See what I mean? Any suggestions?
Why don't you use AutoMapper? Then you can do:
TheirCustomerPartyClass source = WebService.ItsPartyTime();
YourCustomerPartyClass converted =
Mapper.Map<TheirCustomerPartyClass, YourCustomerPartyClass>(source);
TheirCustomerPartyClass original =
Mapper.Map<YourCustomerPartyClass, TheirCustomerPartyClass>(converted);
As long as the properties are identical, you can create a really simple map like this:
Mapper.CreateMap<TheirCustomerPartyClass, YourCustomerPartyClass>();
Mapper.CreateMap<YourCustomerPartyClass, TheirCustomerPartyClass>();
This scenario is common when writing domain patterns. You essentially need to write a domain translator between the two objects. You can do this several ways, but I recommend having an overloaded constructor (or a static method) in the target type that takes the service type and performs the mapping. Since they are two different CLR types, you cannot directly cast from one to the other. You need to copy member-by-member.
public class ClientType
{
public string FieldOne { get; set; }
public string FieldTwo { get; set; }
public ClientType()
{
}
public ClientType( ServiceType serviceType )
{
this.FieldOne = serviceType.FieldOne;
this.FieldTwo = serviceType.FieldTwo;
}
}
Or
public static class DomainTranslator
{
public static ServiceType Translate( ClientType type )
{
return new ServiceType { FieldOne = type.FieldOne, FieldTwo = type.FieldTwo };
}
}
I'm using a 3rd party's set of
webservices...
Assuming you can't modify the classes, I'm not aware of any way you can change the casting behavior. At least, no way that isn't far, far more complicated than just writing a CustomerToAppointmentPartyTranslator() mapping function... :)
Assuming you're on a recent version of C# (3.5, I believe?), this might be a good candidate for an extension method.
Have you looked at adding a conversion operator to one of the domain classes to define an explicit cast? See the MSDN documentation here.
Enjoy!
A simple and very fast way of mapping the types is using the PropertyCopy<TTarget>.CopyFrom<TSource>(TSource source)
method from the MiscUtil library as described here:
using MiscUtil.Reflection;
class A
{
public int Foo { get; set; }
}
class B
{
public int Foo { get; set; }
}
class Program
{
static void Main()
{
A a = new A();
a.Foo = 17;
B b = PropertyCopy<B>.CopyFrom(a);
bool success = b.Foo == 17; // success is true;
}
}
Two classes with exactly the same signature, in two different namespaces, are two different classes. You will not be able to implicitly convert between them if they do not explicitly state how they can be converted from one to the other using implicit or explicit operators.
There are some things you may be able to do with serialization. WCF DataContract classes on one side do not have to be the exact same type as the DataContract on the other side; they just have to have the same signature and be decorated identically. If this is true for your two objects, you can use a DataContractSerializer to "convert" the types through their DataContract decoration.
If you have control over the implementation of one class or the other, you can also define an implicit or explicit operator that will define how the other class can be converted to yours. This will probably simply return a new reference of a deep copy of the other object in your type. Because this is the case, I would define it as explicit, to make sure the conversion is only performed when you NEED it (it will be used in cases when you explicitly cast, such as myAppCustomer = (Appointment.CustomerParty)myCustCustomer;).
Even if you don't control either class, you can write an extension method, or a third class, that will perform this conversion.