Imagine the following class used as a parameters object for some complex request.
public class RequestParameters
{
    ///<summary>Detailed User-Friendly Descriptions.</summary>
    public string Param1 { get; set; } = "Default";
    public int? Param2 { get; set; }
    public bool Param3 { get; set; } = true;
    ...
    public string ParamN { get; set; }
}
I want users to be able to create and configure this parameters object as they see fit before handing it off to be used in a request:
var filtered = new RequestParameters();
filtered.Param1 = "yellow";
filtered.Param3 = true;
_someInstance.InvokeRequest(filtered);
Once I am processing the request, I wish to cache the result based on the request parameters supplied. It would be trivial to implement some clone method and IEquatable<RequestParameters>, and override Equals() and GetHashCode() on the above RequestParameters class, but the latter is widely regarded as bad practice - RequestParameters is mutable, and if an instance were modified after insertion, it could mess with my cache.
On the other hand, I feel like it would be a tremendous amount of code duplication (and a maintenance headache) if I were to implement some sort of ReadOnlyRequestParameters copy of this class, just without the setters.
Would it really be so bad to use this class as a cache key if I'm careful to make copies of the instances users send in, and never modify them once inserted into the cache? What alternatives would you propose if you were a developer working with the same objects?
In the spirit of MattE's suggestion, one potential solution (which I may end up using) is a compromise of the AsReadOnly() solution (which requires duplicating the class structure and a lot of copy/pasting).
If the primary goal of this class is user-friendly parameterization, and we're okay with its use as a cache key being a second-class citizen, a relatively simple way to implement it is to add a GetHashKey() method. All this method needs to do is return an immutable object on which we can invoke Equals and GetHashCode, with the guarantee that equivalent instances of that object are treated as equal. Tuples do this, but so do anonymous types, which are leaner and easier to create when more than 6 parameters are involved:
/// <summary>Get a hashable / comparable key representing the current parameters.</summary>
/// <returns>The hash key.</returns>
public virtual object GetHashKey()
{
    return new { Param1, Param2, Param3, Param4, ..., ParamN };
}
The returned anonymous type correctly implements:
- GetHashCode() as a combined hash of each value in the object.
- Equals() as a member-wise comparison of each value in the object.
Using this, we essentially produce a snapshot (copy) of the RequestParameters object in its current state. We sacrifice the ability to easily reflect on the properties, but make it easy and safe to use them as an Equatable/Hashable cache key without requiring any additional classes.
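As a sketch of how this might plug into the request pipeline (a minimal dictionary-backed cache; RequestResult and ComputeResult are hypothetical placeholders, not from the original code):

```csharp
// Minimal sketch: anonymous-type keys in a dictionary-backed cache.
// RequestResult and ComputeResult are invented placeholders.
private readonly Dictionary<object, RequestResult> _cache =
    new Dictionary<object, RequestResult>();

public RequestResult InvokeRequest(RequestParameters parameters)
{
    object key = parameters.GetHashKey(); // immutable snapshot of the current state
    if (!_cache.TryGetValue(key, out RequestResult result))
    {
        result = ComputeResult(parameters);
        _cache[key] = result; // later mutations of 'parameters' cannot affect this key
    }
    return result;
}
```

Because the key is a snapshot, callers are free to keep mutating their RequestParameters instance afterwards without corrupting the cache.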
In my specific case, avoiding additional classes is particularly important as I don't just have 1 parameters class, I have dozens. What's more, many RequestParameter classes inherit from each other (as many requests share the same basic parameters). Creating a whole set of "ReadOnly" versions of these classes would be daunting. In this solution, each just has to override GetHashKey() to pair its base class's anonymous type with its own additional parameters:
public class AdditionalParameters : RequestParameters
{
    ///<summary>Detailed User-Friendly Descriptions.</summary>
    public int? Param42 { get; set; }
    ...
    public string Param70 { get; set; }

    public override object GetHashKey()
    {
        // Anonymous-type members built from method calls need explicit names.
        return new { BaseKey = base.GetHashKey(), Param42, ..., Param70 };
    }
}
You could always store the parameters in a Tuple once you have gathered them, as that is immutable once created.
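For example (a sketch; the tuple arity would have to match your actual parameter count):

```csharp
// Capture the current parameter values in an immutable Tuple and use it as
// the cache key; Tuple implements structural Equals/GetHashCode.
var key = Tuple.Create(filtered.Param1, filtered.Param2, filtered.Param3);
```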
For the purposes of this question, a 'constant reference' is a reference to an object through which you cannot call methods that modify the object or its properties.
I want something like this:
Const<User> user = provider.GetUser(); // Gets a constant reference to an "User" object
var name = user.GetName(); // Ok. Doesn't modify the object
user.SetName("New value"); // <- Error. Shouldn't be able to modify the object
Ideally, I would mark with a custom attribute (e.g. [Constant]) every method of a class that doesn't modify the instance, and only those methods can be called from the constant reference. Calls to other methods would result in an error, if possible, during compile time.
The idea is that I can return a read-only reference to an object and be sure that it will not be modified by the client.
The technique you're referring to is called "const-correctness", which is a language feature of C++ and Swift, but unfortunately not C#. You're onto something with the custom attribute, though, because that way you could enforce it via a Roslyn analyzer - but that's a rabbit-hole.
Alternatively, there's a much simpler solution using interfaces: because C# (and, I believe, the CLR) does not support const-correctness (the closest we have is the readonly field modifier), the .NET base-class-library designers added "read-only interfaces" to common mutable types, allowing an object (whether mutable or immutable) to expose its functionality via an interface that only exposes non-mutating operations. Examples include IReadOnlyList<T>, IReadOnlyCollection<T>, and IReadOnlyDictionary<TKey, TValue> - while these are all enumerable types, the technique works for singular objects too.
This design has the advantage of working in any language that supports interfaces but not const-correctness.
For each type (class, struct, etc.) in your project that needs to expose data, or any non-mutating operations, without risk of being changed, create a read-only interface.
Modify your consuming code to use these interfaces instead of the concrete type.
Like so:
Supposing we have a mutable class User and a consuming service:
public class User
{
    public String UserName { get; set; }
    public Byte[] PasswordHash { get; set; }
    public Byte[] PasswordSalt { get; set; }

    public Boolean ValidatePassword(String inputPassword)
    {
        Byte[] inputHash = Crypto.GetHash( inputPassword, this.PasswordSalt );
        return Crypto.CompareHashes( this.PasswordHash, inputHash );
    }

    public void ResetSalt()
    {
        this.PasswordSalt = Crypto.GetRandomBytes( 16 );
    }
}
public static void DoReadOnlyStuffWithUser( User user )
{
    ...
}

public static void WriteStuffToUser( User user )
{
    ...
}
Then make an immutable interface:
public interface IReadOnlyUser
{
    // Note that the interface's properties lack setters.
    String UserName { get; }
    IReadOnlyList<Byte> PasswordHash { get; }
    IReadOnlyList<Byte> PasswordSalt { get; }

    // ValidatePassword does not mutate state, so it's exposed.
    Boolean ValidatePassword(String inputPassword);

    // But ResetSalt is not exposed, because it mutates instance state.
}
Then modify your User class and consumers:
public class User : IReadOnlyUser
{
    // (Same as before, except we need to expose IReadOnlyList<Byte> versions of the array properties:)
    IReadOnlyList<Byte> IReadOnlyUser.PasswordHash => this.PasswordHash;
    IReadOnlyList<Byte> IReadOnlyUser.PasswordSalt => this.PasswordSalt;
}
public static void DoReadOnlyStuffWithUser( IReadOnlyUser user )
{
    ...
}

// This method still uses `User` instead of `IReadOnlyUser` because it mutates the instance.
public static void WriteStuffToUser( User user )
{
    ...
}
So, these are the first two ideas I initially had, but don't quite solve the problem.
Using Dynamic Objects:
The first idea I had was creating a dynamic object that would intercept all member invocations and throw an error if the method being called isn't marked with a [Constant] custom attribute. This approach is problematic because a) we don't have the support of the compiler to check for errors in the code (e.g. method name typos) when dealing with dynamic objects, which might lead to a lot of runtime errors; and b) I intend to use this a lot, and looking up methods by name every time one is called might have a considerable performance impact.
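A minimal sketch of that idea, for concreteness (the [Constant] attribute and Const<T> wrapper are hypothetical, and the naive GetMethod lookup would need work for overloaded methods):

```csharp
using System;
using System.Dynamic;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public sealed class ConstantAttribute : Attribute { }

// Hypothetical read-only wrapper: intercepts member calls at runtime and
// rejects any method not marked [Constant].
public class Const<T> : DynamicObject
{
    private readonly T _target;
    public Const(T target) { _target = target; }

    public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
    {
        // Reflection on every call: this is the performance concern from (b).
        // GetMethod by name alone also throws for overloaded methods.
        MethodInfo method = typeof(T).GetMethod(binder.Name);
        if (method == null || method.GetCustomAttribute<ConstantAttribute>() == null)
            throw new InvalidOperationException(binder.Name + " is not marked [Constant].");
        result = method.Invoke(_target, args);
        return true;
    }
}
```

Usage would be `dynamic user = new Const<User>(realUser);` - calls to non-[Constant] members then fail, but only at runtime, which is exactly the weakness described in (a).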
Using RealProxy:
My second idea was using a RealProxy to wrap the real object and validate the methods being called, but this only works with objects that inherit from MarshalByRefObject.
I have a legacy C# library (a set of interrelated algorithms) in which there is a global god object which is passed to all classes. This god object (simply called Manager :D ) has a Parameters member, and an ObjectCollection member (among lots of others).
public class Manager
{
    public Parameters Parameters { get; private set; }
    public ObjectCollection ObjectCollection { get; private set; }
    ...
}
I am unable to test the algorithms because everything takes the manager as dependency, and initializing that means I have to initialize everything. So I want to refactor this design.
Parameters has more than 100 fields in it, the values control the different algorithms. The ObjectCollection has the entities required for the overall execution of the engine, stored by Id, by Name, etc.
The following are the approaches I've thought of, but am not satisfied with:
Pass Parameters and ObjectCollection (or IParameters and IObjectCollection) instead of the Manager, but I don't think this solves any issue. I still wouldn't know which of the parameters each algorithm depends on.
Splitting the parameters class into smaller ones is also difficult, as one parameter may affect many algorithms, so a logical separation is hard to find. Plus, the dependencies of each algorithm may still end up numerous.
A singleton pattern, as is usually done for a Logger, but that is not testable either.
Some of the parameters control the algorithm logic, and some are just required by the algorithm. I'm thinking of making each algorithm a separate class implementing an interface, and at application start deciding which algorithm to instantiate based on the parameters. I might end up splitting the current set of algorithm classes into many more, and I'm afraid I'll complicate things further and lose the structure of the algorithms.
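That last idea would amount to something like this minimal strategy-style sketch (IAlgorithm and the class/parameter names are invented for illustration):

```csharp
// Hypothetical: select the algorithm implementation once, at application start,
// based on a controlling parameter value.
public interface IAlgorithm { void Run(); }

public class FastAlgorithm : IAlgorithm { public void Run() { /* ... */ } }
public class AccurateAlgorithm : IAlgorithm { public void Run() { /* ... */ } }

public static class AlgorithmFactory
{
    // 'useFastVariant' stands in for whichever parameter controls the choice.
    public static IAlgorithm Create(bool useFastVariant) =>
        useFastVariant ? (IAlgorithm)new FastAlgorithm() : new AccurateAlgorithm();
}
```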
Is there any standard way to deal with this, or is just splitting big classes to smaller ones and passing dependencies by constructor the only general advice?
In order to allow yourself to make small steps, I'd start with a single algorithm and identify the parameters it requires. These can then be exposed in an interface, like so:
public interface IAmTheParametersForAlgorithm1
{
    int OneThing { get; }
    int AnotherThing { get; }
}
Then you can alter Manager so that it implements that interface and, as in #marcel's answer, expose those parameters directly on Manager.
Now you can test Algorithm1 with a very small mock or self-shunt because you don't need to initialise a gigantic Manager in order to run your test. And Algorithm1 no longer knows it takes a Manager object.
public class Manager : IAmTheParametersForAlgorithm1 {}

public class Algorithm1
{
    public Algorithm1(IAmTheParametersForAlgorithm1 parameters){}
}
Bit by bit you can continue expanding this to each of the sets of parameters and dealing with small, specific interfaces will allow you to identify where different algorithms have common parameters.
public class Manager :
    IAmTheParametersForAlgorithm1,
    IAmTheParametersForAlgorithm2,
    IAmTheParametersForAlgorithm3,
    IAmTheParametersForAlgorithm4 {}
It also means that as you identify algorithms whose parameters are no longer accessed outside of their interface you can stop injecting Manager into those algorithms, take the parameters out of Manager, and create a new class which only provides those parameters.
This means you can keep your application running the whole time you're making this change, even if you aren't able to dedicate time to one gigantic breaking change.
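The end state for a fully migrated algorithm might look like this sketch (class and property names are illustrative):

```csharp
// Once nothing else reads these parameters from Manager, they can live in a
// small dedicated class, and Manager can drop the corresponding interface.
public class Algorithm1Parameters : IAmTheParametersForAlgorithm1
{
    public int OneThing { get; }
    public int AnotherThing { get; }

    public Algorithm1Parameters(int oneThing, int anotherThing)
    {
        OneThing = oneThing;
        AnotherThing = anotherThing;
    }
}
```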
For the Parameters, I would go with something like this:
public class Parameters
{
    public int MyProperty1 { get; set; }
    public int MyProperty2 { get; set; }
    public int MyProperty3 { get; set; }
}

public class AlgorithmParameters1
{
    private Parameters parameters;

    public int MyProperty1 { get { return parameters.MyProperty1; } }
    public int MyProperty3 { get { return parameters.MyProperty3; } }

    public AlgorithmParameters1(Parameters parameters)
    {
        this.parameters = parameters;
    }
}

public class Algorithm1
{
    public void Run(AlgorithmParameters1 parameters)
    {
        // Access only MyProperty1 and MyProperty3...
    }
}
Usage would look like:
var parameters = new Parameters()
{
    MyProperty1 = 4,
    MyProperty2 = 5,
    MyProperty3 = 6,
};
new Algorithm1().Run(new AlgorithmParameters1(parameters));
By the way, I don't see how you distinguish between parameters that control an algorithm and ones that are merely required by it. By "control", do you mean they are used to decide which algorithm to run?
I have a situation where I need to determine whether an instance of a base class is actually a specific derived class, but the type expected in the model is the base class, and it is stored using NHibernate/Fluent NHibernate with a Table Per Concrete Class hierarchy. So my structure looks a bit like this:
class Mutation : Entity {
    public virtual Aspect Aspect { get; set; }
    public virtual Measurement Measurement { get; set; }
}

abstract class Measurement : Entity {
    // does nothing on its own, really.
}

class Numeric : Measurement {
    public virtual int Value { get; set; }
    // may have some other properties
}

class Observable : Measurement {
    public virtual Aspect Aspect { get; set; }
}
So basically, what is going on here is this: Mutation expects to be pointed at a type of data, and at a measured change (there could be other ways to change the data, not just flat numerics). So then I have an object that merely expects an IList<Mutation>, and I map each subsequent type of Measurement to its own specific table, sharing the identity with the base Measurement class. That works fine so far.
Now an Observable is different in that it does not store its own value, but rather it points again to another Aspect that may have its own set of mutations and changes. The idea is that the value will always be retrieved from the intended source, and not saved as a flat value in the database. (There is a reason for wanting this behavior that is beyond the scope of this question)
So then, my thought was basically to put in an evaluation like this ..
foreach(var measurement in list) {
    if(measurement is Observable){
        // then we know to lookup the other value
    }
}
That didn't work. I still get the proxied result of just MeasurementProxy. But the same code works fine in a standalone C# application without the use of nHibernate, so I feel a great deal of confidence that the issue is with the proxy.
I then added the following method to my base Entity class...
/// <summary>
/// Unwrap the type from whatever proxy it may be
/// behind, returning the actual .NET <see cref="System.Type"/>.
/// </summary>
/// <returns>
/// A pure <see cref="System.Type"/> that is no longer proxied.
/// </returns>
public virtual Type Unwrap() {
    return GetType();
}
Now if I do a Console.WriteLine(measurement.Unwrap()); I get the right type, but the same evaluation ...
foreach(var measurement in list) {
    if(measurement.Unwrap() is Observable){
        // then we know to lookup the other value
    }
}
still does not function. It never runs. Can anyone help me out here?
That's because Unwrap() returns a Type, so measurement.Unwrap() is Observable will always be false, and measurement.Unwrap() is Type will always be true.
Use the typeof operator and reference equality instead:
if (measurement.Unwrap() == typeof(Observable)) {
    // Then we know to lookup the other value.
}
The check in Hamidi's answer is not enough. As soon as you add some lazy properties to your mapping, for example:
<property name="Description" type="StringClob" not-null="false" lazy="true"/>
the Unwrap method will fail, because objects of types that employ lazy properties are always proxies, and the check

if (measurement.Unwrap() == typeof(Observable)) {
    // Then we know to lookup the other value.
}

will fail because Unwrap returns the type of the proxy, not the expected type.
I use the following methods for checking entity types:
public virtual Type GetEntityType()
{
    var type = GetType();
    // Hack to avoid a problem with some types that will always be proxies.
    // Needs re-evaluation when upgrading to NH 3.3.3.
    return type.Name.EndsWith("Proxy") ? type.BaseType : type;
}

public virtual bool IsOfType<T>()
{
    return typeof(T).IsAssignableFrom(GetEntityType());
}
and the check becomes:
if (measurement.IsOfType<Observable>()) {
    // Then we know to lookup the other value.
}
As you see in the code comment, this is a hack for NH 3.1 and Castle Proxy: Castle Dynamic Proxy types always end with Proxy, so I exploited this signature to detect if the object is proxy or not. My project is still stuck with NH3.1 so I'm not sure what changes the method will need with NH3.3.
In Wicket, they have something called a MetaDataKey. These are used to store typed meta information in Wicket components. Since Wicket makes heavy use of serialization, the Wicket designers decided that simple object identity would not be reliable and so made MetaDataKey an abstract class, forcing you to create a subclass for each key and then check to see if the key is an instance of subclass (from the Wicket source code):
public boolean equals(Object obj) {
    return obj != null && getClass().isInstance(obj);
}
Thus, to create a key and store something I would do something like this:
private final static MetaDataKey<String> foo = new MetaDataKey<String>() {};
...
component.setMetaData(foo, "bar");
First, why would making a subtype work better than using object identity under serialization?
Second, if I wanted to create a similar facility in C# (which lacks anonymous inner classes), how would I go about it?
The problem with serialization and identity is, that the serialization process actually creates a clone of the original object. Most of the time,
SomeThing x = ...;
ObjectOutputStream oos = ...;
oos.writeObject(x);
followed by
ObjectInputStream ois = ...;
SomeThing y = (SomeThing)ois.readObject();
cannot and does not enforce that x == y (though it should be the case that x.equals(y)).
If you really wanted to go with identity, you'd have to write custom serialization code, which enforces, that reading an instance of your class from the stream yields actually the same (as in singleton) instance that was written. This is hard to get right, and I think, forcing developers to do that simply to declare a magic key would make the API quite hard to use.
Nowadays, one could use enums, and rely on the VM to enforce the singleton character.
enum MyMetaDataKey implements HypotheticalMetaDataKeyInterface {
    TITLE(String.class),
    WIDTH(Integer.class);

    private final Class<?> type;
    private MyMetaDataKey(Class<?> t) { type = t; }
    public Class<?> getType() { return type; }
}
The disadvantage is that you cannot declare your enum to inherit from a common base class (you can have it implement interfaces, though), so you would have to write all the support code, which MetaDataKey might otherwise provide for you, over and over again. In the example above, all of this (getType and so on) should have been provided by an abstract base class, but it couldn't be, because we used an enum.
As to the second part of the question... Unfortunately, I don't feel proficient enough in C# to answer that (besides the already mentioned use a plain private class solution already given in the comments).
That said... (Edit to answer the questions which appeared in the comments on Ben's answer) One possible solution to achieve something similar (in the sense, that it would be just as usable as the Java solution):
[Serializable]
public class MetaDataKey<T> {
    private Guid uniqueId;
    private Type type;

    public MetaDataKey(Guid key, Type type) {
        this.uniqueId = key;
        this.type = type;
    }

    public override bool Equals(object other) {
        return other is MetaDataKey<T> && uniqueId == ((MetaDataKey<T>)other).uniqueId;
    }
}
which may be used as in
class MyStuff {
    private static MetaDataKey<String> key = new MetaDataKey<String>(Guid.NewGuid(), typeof(String));
}
Please ignore any violations of the C#-language. It's too long since I used it.
This may look like a valid solution. The problem, however, lies in the initialization of the key constant. If it is done like in the example above, then each time the application starts, a new Guid value is created and used as the identifier for MyStuff's meta-data value. So if, say, you have serialized data from a previous run of the program (say, stored in a file), it will have keys with a different Guid value for MyStuff's meta-data key. Effectively, after deserialization, any request
String myData = magicDeserializedMetaDataMap.Get(MyStuff.key);
will fail -- simply because the Guids differ. So, in order to make the example work, you have to have persistent pre-defined Guids:
class MyStuff {
    private static Guid keyId = new Guid("{pre-define-xyz}");
    private static MetaDataKey<String> key = new MetaDataKey<String>(keyId, typeof(String));
}
Now, things work as desired, but the burden of maintaining the key Guids has come upon you. This is, I think, what the Java solution tries to avoid with this cute anonymous subclass trick.
As far as the rationale for subtyping vs using reference identity, I'm as much in the dark as you. If I had to guess, I'd say that as references are basically fancy pointers, their value isn't necessarily guaranteed to be retained through serialization/deserialization, and a clever way to simulate a unique object ID is with whatever the Java compiler chooses to call a particular anonymous subclass. I'm not too conversant with the Java spec, so I don't know if this is specified behavior or an implementation detail of the various compilers. I may be entirely off track here.
C# has anonymous types, but they are severely limited - in particular, they cannot inherit from anything other than System.Object and can implement no interfaces. Given that, I don't see how this particular technique can be ported from Java.
One way to go about retaining object uniqueness through serialization could go thusly: The Wicket-style base class could, in theory, retain a private member that is runtime-unique and generated on construction, like a System.Guid or a System.IntPtr that gets the value of the object's handle. This value could be (de)serialized and used as a stand-in for reference equality.
[Serializable]
public class MetaDataKey<T>
{
    private Guid id;

    public MetaDataKey(...)
    {
        this.id = Guid.NewGuid();
        ....
    }

    public override bool Equals(object obj)
    {
        var that = obj as MetaDataKey<T>;
        return that != null && this.id == that.id;
    }
}
EDIT
Here's how to do it by saving the object's actual reference value; this takes a bit less memory, and is slightly more true to the notion of reference-equality.
using System.Runtime.InteropServices;

[Serializable]
public class AltDataKey<T>
{
    private long id; // IntPtr is either 32- or 64-bit, so to be safe store it as a long.

    public AltDataKey(...)
    {
        var handle = GCHandle.Alloc(this);
        var ptr = GCHandle.ToIntPtr(handle);
        id = (long)ptr;
        handle.Free();
    }

    // as above
}
I'm using a 3rd party's set of webservices, and I've hit a small snag. Before I manually make a method copying each property from the source to the destination, I thought I'd ask here for a better solution.
I've got 2 objects, one of type Customer.CustomerParty and one of type Appointment.CustomerParty. The CustomerParty objects are actually, property for property and sub-object for sub-object, exactly the same. But I can't cast from one to the other.
So, I need to find a certain person from the webservice. I can do that by calling Customer.FindCustomer(customerID) and it returns a Customer.CustomerParty object.
I need to take that person that I found and then use them a few lines down in a "CreateAppointment" request. Appointment.CreateAppointment takes an appointment object, and the appointment object contains a CustomerParty object.
However, the CustomerParty object it wants is really Appointment.CustomerParty. I've got a Customer.CustomerParty.
See what I mean? Any suggestions?
Why don't you use AutoMapper? Then you can do:
TheirCustomerPartyClass source = WebService.ItsPartyTime();
YourCustomerPartyClass converted =
Mapper.Map<TheirCustomerPartyClass, YourCustomerPartyClass>(source);
TheirCustomerPartyClass original =
Mapper.Map<YourCustomerPartyClass, TheirCustomerPartyClass>(converted);
As long as the properties are identical, you can create a really simple map like this:
Mapper.CreateMap<TheirCustomerPartyClass, YourCustomerPartyClass>();
Mapper.CreateMap<YourCustomerPartyClass, TheirCustomerPartyClass>();
This scenario is common when writing domain patterns. You essentially need to write a domain translator between the two objects. You can do this several ways, but I recommend an overloaded constructor (or a static method) on the target type that takes the service type and performs the mapping. Since they are two distinct CLR types, you cannot directly cast from one to the other; you need to copy member-by-member.
public class ClientType
{
    public string FieldOne { get; set; }
    public string FieldTwo { get; set; }

    public ClientType()
    {
    }

    public ClientType( ServiceType serviceType )
    {
        this.FieldOne = serviceType.FieldOne;
        this.FieldTwo = serviceType.FieldTwo;
    }
}
Or
public static class DomainTranslator
{
    public static ServiceType Translate( ClientType type )
    {
        return new ServiceType { FieldOne = type.FieldOne, FieldTwo = type.FieldTwo };
    }
}
I'm using a 3rd party's set of
webservices...
Assuming you can't modify the classes, I'm not aware of any way you can change the casting behavior. At least, no way that isn't far, far more complicated than just writing a CustomerToAppointmentPartyTranslator() mapping function... :)
Assuming you're on a recent version of C# (3.5, I believe?), this might be a good candidate for an extension method.
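Such an extension method might look like this sketch (the stand-in types and property names are invented; the real ones come from the generated web-service proxies):

```csharp
// Stand-in types for illustration only; the real classes come from the
// two web-service namespaces, and the property names here are invented.
namespace Customer { public class CustomerParty { public string Name { get; set; } } }
namespace Appointment { public class CustomerParty { public string Name { get; set; } } }

public static class CustomerPartyExtensions
{
    // Member-by-member copy from the Customer type to the Appointment type.
    public static Appointment.CustomerParty ToAppointmentParty(this Customer.CustomerParty source)
    {
        return new Appointment.CustomerParty
        {
            Name = source.Name,
        };
    }
}
```

Usage then reads naturally at the call site: `var party = Customer.FindCustomer(customerID).ToAppointmentParty();`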
Have you looked at adding a conversion operator to one of the domain classes to define an explicit cast? See the MSDN documentation here.
Enjoy!
A simple and very fast way of mapping the types is using the PropertyCopy<TTarget>.CopyFrom<TSource>(TSource source)
method from the MiscUtil library as described here:
using MiscUtil.Reflection;

class A
{
    public int Foo { get; set; }
}

class B
{
    public int Foo { get; set; }
}

class Program
{
    static void Main()
    {
        A a = new A();
        a.Foo = 17;
        B b = PropertyCopy<B>.CopyFrom(a);
        bool success = b.Foo == 17; // success is true
    }
}
Two classes with exactly the same signature, in two different namespaces, are two different classes. You will not be able to implicitly convert between them if they do not explicitly state how they can be converted from one to the other using implicit or explicit operators.
There are some things you may be able to do with serialization. WCF DataContract classes on one side do not have to be the exact same type as the DataContract on the other side; they just have to have the same signature and be decorated identically. If this is true for your two objects, you can use a DataContractSerializer to "convert" the types through their DataContract decoration.
If you have control over the implementation of one class or the other, you can also define an implicit or explicit operator that defines how the other class is converted to yours. This will probably just return a new instance of your type containing a deep copy of the other object's data. Because of that, I would define it as explicit, to make sure the conversion is only performed when you NEED it (it will be used in cases where you explicitly cast, such as myAppCustomer = (Appointment.CustomerParty)myCustCustomer;).
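A sketch of such an explicit operator, assuming you control one of the types (TheirParty/MyParty and the field names are placeholders for the real CustomerParty classes):

```csharp
// TheirParty stands in for the web service's class; MyParty for the one you control.
public class TheirParty
{
    public string FieldOne { get; set; }
    public string FieldTwo { get; set; }
}

public class MyParty
{
    public string FieldOne { get; set; }
    public string FieldTwo { get; set; }

    // Explicit, so the copying conversion only happens on an explicit cast.
    public static explicit operator MyParty(TheirParty source)
    {
        return new MyParty { FieldOne = source.FieldOne, FieldTwo = source.FieldTwo };
    }
}
```

With this in place, `MyParty mine = (MyParty)theirs;` compiles, while an accidental implicit conversion does not.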
Even if you don't control either class, you can write an extension method, or a third class, that will perform this conversion.